[jira] [Commented] (IMPALA-8042) Better selectivity estimate for BETWEEN

2024-05-25 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849437#comment-17849437
 ] 

ASF subversion and git services commented on IMPALA-8042:
-

Commit d0237fbe47eb5089ee19a0a201045b862d65ecaa in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=d0237fbe4 ]

IMPALA-8042: Assign BETWEEN selectivity for discrete-unique column

The Impala frontend cannot evaluate a BETWEEN/NOT BETWEEN predicate
directly. It needs to transform a BetweenPredicate into a
CompoundPredicate consisting of upper-bound and lower-bound
BinaryPredicates through BetweenToCompoundRule.java. The
BinaryPredicates can then be pushed down or rewritten into other forms
by other expression rewrite rules. However, the selectivity of the
BetweenPredicate and its derivatives remains unassigned, and it often
collapses with other unknown-selectivity predicates into a collective
selectivity equal to Expr.DEFAULT_SELECTIVITY (0.1).
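
Schematically, the rewrite looks like this (a minimal self-contained
sketch; the classes below are stand-ins for Impala's BinaryPredicate
and CompoundPredicate, not the actual BetweenToCompoundRule.java code):

{code:java}
// Sketch of the BetweenToCompoundRule rewrite:
//   c BETWEEN lo AND hi      =>  c >= lo AND c <= hi
//   c NOT BETWEEN lo AND hi  =>  c < lo OR c > hi
class BetweenRewriteSketch {
  // Stand-in for Impala's BinaryPredicate.
  record Pred(String lhs, String op, String rhs) {
    @Override public String toString() { return lhs + " " + op + " " + rhs; }
  }

  static String rewrite(String col, String lo, String hi, boolean notBetween) {
    Pred lower = new Pred(col, notBetween ? "<" : ">=", lo);
    Pred upper = new Pred(col, notBetween ? ">" : "<=", hi);
    return lower + (notBetween ? " OR " : " AND ") + upper;
  }

  public static void main(String[] args) {
    // prints: c_custkey >= 1234 AND c_custkey <= 2345
    System.out.println(rewrite("c_custkey", "1234", "2345", false));
  }
}
{code}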

This patch adds a narrow optimization of BetweenPredicate selectivity
when the following criteria are met:

1. The BetweenPredicate is bound to a slot reference of a single column
   of a table.
2. The column type is discrete, such as INTEGER or DATE.
3. The column stats are available.
4. The column is sufficiently unique based on available stats.
5. The BETWEEN/NOT BETWEEN predicate is in good form (lower bound value
   <= upper bound value).
6. The final calculated selectivity is less than or equal to
   Expr.DEFAULT_SELECTIVITY.

If these criteria are unmet, the Planner will revert to the old
behavior, which is leaving the selectivity unassigned.

Since this patch only targets BetweenPredicate over unique columns, the
following query will still have the default scan selectivity (0.1):

select count(*) from tpch.customer c
where c.c_custkey >= 1234 and c.c_custkey <= 2345;

While this equivalent query, written with a BETWEEN predicate, will
have lower scan selectivity:

select count(*) from tpch.customer c
where c.c_custkey between 1234 and 2345;
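
For intuition, over a sufficiently unique discrete column each distinct
value matches about one row, so a closed range [lo, hi] covers roughly
(hi - lo + 1) of the table's rows. The sketch below illustrates that
estimate (our illustration of the idea, not necessarily the exact
arithmetic the patch implements):

{code:java}
// Sketch: BETWEEN selectivity over a discrete, near-unique column.
class BetweenSelectivitySketch {
  static double betweenSelectivity(long lo, long hi, long numRows) {
    if (hi < lo || numRows <= 0) return -1;  // bad bounds or missing stats
    double sel = (double) (hi - lo + 1) / numRows;
    // Criterion 6 above: keep the estimate only if it beats the default.
    return sel <= 0.1 ? sel : -1;
  }

  public static void main(String[] args) {
    // tpch.customer has 150,000 rows, so BETWEEN 1234 AND 2345 gives
    // 1112 / 150000 ~= 0.0074, well below Expr.DEFAULT_SELECTIVITY (0.1).
    System.out.println(betweenSelectivity(1234, 2345, 150_000));
  }
}
{code}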

This patch calculates the BetweenPredicate selectivity during the
transformation in BetweenToCompoundRule.java. The selectivity is
piggy-backed onto the resulting CompoundPredicate and BinaryPredicate
as the betweenSelectivity_ field, kept separate from the selectivity_
field. Analyzer.getBoundPredicates() is modified to prioritize a
derived BinaryPredicate over an ordinary BinaryPredicate in its return
value, to prevent the derived BinaryPredicate from being eliminated by
a matching ordinary BinaryPredicate.

Testing:
- Add table functional_parquet.unique_with_nulls.
- Add FE tests in ExprCardinalityTest#testBetweenSelectivity,
  ExprCardinalityTest#testNotBetweenSelectivity, and
  PlannerTest#testScanCardinality.
- Pass core tests.

Change-Id: Ib349d97349d1ee99788645a66be1b81749684d10
Reviewed-on: http://gerrit.cloudera.org:8080/21377
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Better selectivity estimate for BETWEEN
> ---
>
> Key: IMPALA-8042
> URL: https://issues.apache.org/jira/browse/IMPALA-8042
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 3.1.0
>Reporter: Paul Rogers
>Assignee: Riza Suminto
>Priority: Minor
>
> The analyzer rewrites a BETWEEN expression into a pair of inequalities.
> IMPALA-8037 explains that the planner then groups all such non-equality
> conditions together and assigns a selectivity of 0.1. IMPALA-8031 explains
> that the analyzer should handle inequalities better.
> BETWEEN is a special case that informs the final result. If we assume a
> selectivity of s for an inequality, then BETWEEN should be something like
> s/2. The intuition is that if c >= x includes, say, ⅓ of the values, and
> c <= y includes a third of the values, then c BETWEEN x AND y should be a
> narrower set of values, say ⅙.
> [Ramakrishnan and
> Gehrke|http://pages.cs.wisc.edu/~dbbook/openAccess/Minibase/optimizer/costformula.html]
> recommend 0.4 for BETWEEN, 0.3 for inequality, and 0.3^2 = 0.09 for the
> general expression x <= c AND c <= y. Note the discrepancy between the
> compound inequality case and the BETWEEN case, likely reflecting the
> additional information we obtain when the user chooses to use BETWEEN.
> To implement a special BETWEEN selectivity in Impala, we must remember the 
> selectivity of BETWEEN during the rewrite to a compound inequality.






[jira] [Commented] (IMPALA-13105) Multiple imported query profiles fail to import/clear at once

2024-05-25 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849436#comment-17849436
 ] 

ASF subversion and git services commented on IMPALA-13105:
--

Commit 8a6f2824b8abf53ea022ca571da33619d564a14a in impala's branch 
refs/heads/master from Surya Hebbar
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=8a6f2824b ]

IMPALA-13105: Fix multiple imported query profiles fail to import/clear at once

On importing multiple query profiles, insertion of the last query in
the queue fails because the page reloads without allowing a delay for
the insertion to complete.

This has been fixed by adding a delay after inserting the final query.

On clearing all the imported queries, the page sometimes reloads before
the IndexedDB object store is cleared.

This has been fixed by triggering the page reload only after clearing
the object store succeeds.

Change-Id: I42470fecd0cff6e193f080102575e51d86a2d562
Reviewed-on: http://gerrit.cloudera.org:8080/21450
Reviewed-by: Wenzhe Zhou 
Reviewed-by: Riza Suminto 
Tested-by: Impala Public Jenkins 


> Multiple imported query profiles fail to import/clear at once
> -
>
> Key: IMPALA-13105
> URL: https://issues.apache.org/jira/browse/IMPALA-13105
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Surya Hebbar
>Assignee: Surya Hebbar
>Priority: Major
>
> When multiple query profiles are chosen at once, the last query profile in 
> the insertion queue fails as the page reloads without providing a delay for 
> inserting it.
>  
> The same behavior is seen when clearing all the query profiles.
>  
> This is mostly seen in Chromium-based browsers.






[jira] [Commented] (IMPALA-13034) Add logs for slow HTTP requests dumping the profile

2024-05-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849073#comment-17849073
 ] 

ASF subversion and git services commented on IMPALA-13034:
--

Commit b975165a0acfe37af302dd7c007360633df54917 in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=b975165a0 ]

IMPALA-13034: Add logs and counters for HTTP profile requests blocking client 
fetches

There are several endpoints in the WebUI that can dump a query profile:
/query_profile, /query_profile_encoded, /query_profile_plain_text, and
/query_profile_json. The HTTP handler thread goes into
ImpalaServer::GetRuntimeProfileOutput(), which acquires the lock of the
ClientRequestState. This can block client requests that are fetching
query results.

To help identify this issue, this patch adds warning logs when such
profile-dumping requests run slow while the query is still in flight.
It also adds a profile counter, GetInFlightProfileTimeStats, for the
summary stats of this time. Dumping a profile after the query is
archived (e.g. closed) won't be tracked.

Logs for slow HTTP responses are also added. The thresholds are defined
by two new flags, slow_profile_dump_warning_threshold_ms and
slow_http_response_warning_threshold_ms.
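
The warning pattern itself is straightforward (a minimal sketch in
Java; Impala's implementation is C++, and the threshold value below is
an assumption, not the flags' actual default):

{code:java}
import java.util.logging.Logger;

// Sketch: time the handler and warn when it exceeds a threshold.
class SlowResponseLoggingSketch {
  static final Logger LOG = Logger.getLogger("http");
  static final long WARNING_THRESHOLD_MS = 500;  // assumed value

  static void handle(Runnable handler, String endpoint, String clientAddr) {
    long startMs = System.currentTimeMillis();
    handler.run();
    long elapsedMs = System.currentTimeMillis() - startMs;
    if (elapsedMs > WARNING_THRESHOLD_MS) {
      LOG.warning("Slow HTTP response for " + endpoint + " from "
          + clientAddr + ": took " + elapsedMs + " ms");
    }
  }
}
{code}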

Note that dumping the profile in-flight won't always block the query,
e.g. if there are no client fetch requests or if the coordinator
fragment is idle waiting for executor fragment instances. So a long time
shown in GetInFlightProfileTimeStats doesn't mean it's hitting the
issue.

To better identify this issue, this patch adds another profile counter,
ClientFetchLockWaitTimer, for the cumulative time client fetch requests
spend waiting for locks.

Also fixes false-positive logs complaining about invalid query handles.
Such logs are emitted in GetQueryHandle() when the query is not found
in the active query map, even though the query could still exist in the
query log. This removes the logs from GetQueryHandle() and lets the
callers decide whether to log the error.

Tests:
 - Added e2e test
 - Ran CORE tests

Change-Id: I538ebe914f70f460bc8412770a8f7a1cc8b505dc
Reviewed-on: http://gerrit.cloudera.org:8080/21412
Reviewed-by: Impala Public Jenkins 
Tested-by: Michael Smith 


> Add logs for slow HTTP requests dumping the profile
> ---
>
> Key: IMPALA-13034
> URL: https://issues.apache.org/jira/browse/IMPALA-13034
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Reporter: Quanlong Huang
>Assignee: Quanlong Huang
>Priority: Critical
> Fix For: Impala 4.5.0
>
>
> There are several endpoints in the WebUI that can dump a query profile: 
> /query_profile, /query_profile_encoded, /query_profile_plain_text, 
> /query_profile_json
> The HTTP handler thread goes into ImpalaServer::GetRuntimeProfileOutput(), 
> which acquires the lock of the ClientRequestState. This could block client 
> requests in fetching query results. We should add warning logs when such HTTP 
> requests run slow (e.g. when the profile is too large to download in a short 
> time). The IP address and other info of such requests should also be logged.
> Related code:
> https://github.com/apache/impala/blob/f620e5d5c0bbdb0fd97bac31c7b7439cd13c6d08/be/src/service/impala-server.cc#L736
> https://github.com/apache/impala/blob/f620e5d5c0bbdb0fd97bac31c7b7439cd13c6d08/be/src/service/impala-beeswax-server.cc#L601
> https://github.com/apache/impala/blob/f620e5d5c0bbdb0fd97bac31c7b7439cd13c6d08/be/src/service/impala-hs2-server.cc#L207






[jira] [Commented] (IMPALA-13102) Loading tables with illegal stats failed

2024-05-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849072#comment-17849072
 ] 

ASF subversion and git services commented on IMPALA-13102:
--

Commit e35f8183cb1ba069ae00ee93e71451eccd505d0a in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=e35f8183c ]

IMPALA-13102: Normalize invalid column stats from HMS

Column stats like numDVs, numNulls in HMS could have arbitrary values.
Impala expects them to be non-negative or -1 for unknown. So loading
tables with invalid stats values (<-1) will fail.

This patch adds logic to normalize the stats values: if a value is less
than -1, use -1 instead and log a corresponding warning. Also refactors
some redundant code in ColumnStats.
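
The normalization itself can be as small as this (a sketch with
illustrative names, not the actual ColumnStats code):

{code:java}
// Sketch: clamp invalid HMS column stats. -1 means "unknown"; anything
// below -1 is invalid and is normalized to -1 with a warning.
class StatsNormalizerSketch {
  static long normalize(String name, long value) {
    if (value < -1) {
      System.err.printf("Invalid stat %s=%d from HMS, normalizing to -1%n",
          name, value);
      return -1;
    }
    return value;
  }

  public static void main(String[] args) {
    System.out.println(normalize("numDVs", -100));  // prints -1, logs warning
    System.out.println(normalize("numNulls", 0));   // prints 0
  }
}
{code}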

Tests:
 - Add e2e test

Change-Id: If6216e3d6e73a529a9b3a8c0ea9d22727ab43f1a
Reviewed-on: http://gerrit.cloudera.org:8080/21445
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Loading tables with illegal stats failed
> 
>
> Key: IMPALA-13102
> URL: https://issues.apache.org/jira/browse/IMPALA-13102
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Quanlong Huang
>Assignee: Quanlong Huang
>Priority: Critical
>
> When the table has illegal stats, e.g. numDVs=-100, Impala can't load the 
> table, so DROP STATS or DROP TABLE can't be performed on the table.
> {code:sql}
> [localhost:21050] default> drop stats alltypes_bak;
> Query: drop stats alltypes_bak
> ERROR: AnalysisException: Failed to load metadata for table: 'alltypes_bak'
> CAUSED BY: TableLoadingException: Failed to load metadata for table: 
> default.alltypes_bak
> CAUSED BY: IllegalStateException: ColumnStats{avgSize_=4.0, 
> avgSerializedSize_=4.0, maxSize_=4, numDistinct_=-100, numNulls_=0, 
> numTrues=-1, numFalses=-1, lowValue=-1, highValue=-1}{code}
> We should allow at least dropping the stats or dropping the table, so the 
> user can use Impala to recover the stats.
> Stacktrace in the logs:
> {noformat}
> I0520 08:00:56.661746 17543 jni-util.cc:321] 
> 5343142d1173494f:44dcde8c] 
> org.apache.impala.common.AnalysisException: Failed to load metadata for 
> table: 'alltypes_bak'
> at 
> org.apache.impala.analysis.Analyzer.resolveTableRef(Analyzer.java:974)
> at 
> org.apache.impala.analysis.DropStatsStmt.analyze(DropStatsStmt.java:94)
> at 
> org.apache.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:551)
> at 
> org.apache.impala.analysis.AnalysisContext.analyzeAndAuthorize(AnalysisContext.java:498)
> at 
> org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2542)
> at 
> org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2224)
> at 
> org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1985)
> at 
> org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:175)
> Caused by: org.apache.impala.catalog.TableLoadingException: Failed to load 
> metadata for table: default.alltypes_bak
> CAUSED BY: IllegalStateException: ColumnStats{avgSize_=4.0, 
> avgSerializedSize_=4.0, maxSize_=4, numDistinct_=-100, numNulls_=0, 
> numTrues=-1, numFalses=-1, lowValue=-1, highValue=-1}
> at 
> org.apache.impala.catalog.IncompleteTable.loadFromThrift(IncompleteTable.java:162)
> at org.apache.impala.catalog.Table.fromThrift(Table.java:586)
> at 
> org.apache.impala.catalog.ImpaladCatalog.addTable(ImpaladCatalog.java:479)
> at 
> org.apache.impala.catalog.ImpaladCatalog.addCatalogObject(ImpaladCatalog.java:334)
> at 
> org.apache.impala.catalog.ImpaladCatalog.updateCatalog(ImpaladCatalog.java:262)
> at 
> org.apache.impala.service.FeCatalogManager$CatalogdImpl.updateCatalogCache(FeCatalogManager.java:114)
> at 
> org.apache.impala.service.Frontend.updateCatalogCache(Frontend.java:585)
> at 
> org.apache.impala.service.JniFrontend.updateCatalogCache(JniFrontend.java:196)
> at .: 
> org.apache.impala.catalog.TableLoadingException: Failed to load metadata for 
> table: default.alltypes_bak
> at org.apache.impala.catalog.HdfsTable.load(HdfsTable.java:1318)
> at org.apache.impala.catalog.HdfsTable.load(HdfsTable.java:1213)
> at org.apache.impala.catalog.TableLoader.load(TableLoader.java:145)
> at 
> org.apache.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:251)
> at 
> org.apache.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:247)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> 

[jira] [Commented] (IMPALA-11735) Handle CREATE_TABLE event when the db is invisible to the impala server user

2024-05-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-11735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17848916#comment-17848916
 ] 

ASF subversion and git services commented on IMPALA-11735:
--

Commit 9672312015be959360795a8af0843fdf386b557c in impala's branch 
refs/heads/master from Sai Hemanth Gantasala
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=967231201 ]

IMPALA-11735: Handle CREATE_TABLE event when the db is invisible to the
impala server user

It's possible that some dbs are invisible to the Impala cluster due to
authorization restrictions. However, CREATE_TABLE events in such dbs
will put the event processor into the ERROR state. The event processor
should ignore such CREATE_TABLE events when the database is not found.

Note: this is an incorrect setup, where the 'impala' superuser is denied
access to the database metadata object but is allowed to fetch events
from the metastore's notification log table.
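
Conceptually the handling looks like this (a sketch; the class and
method names are illustrative stand-ins, not Impala's actual
MetastoreEvents code):

{code:java}
import java.util.Map;
import java.util.logging.Logger;

// Sketch: skip CREATE_TABLE events whose database isn't visible in the
// catalog instead of failing event processing.
class CreateTableEventSketch {
  static final Logger LOG = Logger.getLogger("events");

  static void process(Map<String, Object> catalogDbs, String db, String tbl) {
    if (!catalogDbs.containsKey(db)) {
      LOG.warning("Ignoring CREATE_TABLE event for " + db + "." + tbl
          + ": database not found (possibly invisible due to authorization)");
      return;  // previously this put the event processor in ERROR state
    }
    // ... otherwise add the table to the catalog ...
  }
}
{code}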

Testing:
- Manually verified this on local cluster.
- Added automated unit test to verify the same.

Change-Id: I90275bb8c065fc5af61186901ac7e9839a68c43b
Reviewed-on: http://gerrit.cloudera.org:8080/21188
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Handle CREATE_TABLE event when the db is invisible to the impala server user
> 
>
> Key: IMPALA-11735
> URL: https://issues.apache.org/jira/browse/IMPALA-11735
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Quanlong Huang
>Assignee: Sai Hemanth Gantasala
>Priority: Critical
>
> It's possible that some dbs are invisible to the Impala cluster due to 
> authorization restrictions. However, the CREATE_TABLE events in such dbs will 
> put the event processor into the ERROR state:
> {noformat}
> E1026 03:02:30.650302 116774 MetastoreEventsProcessor.java:684] Unexpected 
> exception received while processing event
> Java exception follows:
> org.apache.impala.catalog.events.MetastoreNotificationException: EventId: 
> 184240416 EventType: CREATE_TABLE Unable to process event
> at 
> org.apache.impala.catalog.events.MetastoreEvents$CreateTableEvent.process(MetastoreEvents.java:735)
> at 
> org.apache.impala.catalog.events.MetastoreEvents$MetastoreEvent.processIfEnabled(MetastoreEvents.java:345)
> at 
> org.apache.impala.catalog.events.MetastoreEventsProcessor.processEvents(MetastoreEventsProcessor.java:772)
> at 
> org.apache.impala.catalog.events.MetastoreEventsProcessor.processEvents(MetastoreEventsProcessor.java:670)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> E1026 03:02:30.650447 116774 MetastoreEventsProcessor.java:795] Notification 
> event is null
> {noformat}
> It should be handled (e.g. ignored) and reported to the admin (e.g. in logs).






[jira] [Commented] (IMPALA-13083) Clarify REASON_MEM_LIMIT_TOO_LOW_FOR_RESERVATION error message

2024-05-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17848917#comment-17848917
 ] 

ASF subversion and git services commented on IMPALA-13083:
--

Commit 98739a84557a209e05694abd79f62f7f7daf8777 in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=98739a845 ]

IMPALA-13083: Clarify REASON_MEM_LIMIT_TOO_LOW_FOR_RESERVATION

This patch improves the REASON_MEM_LIMIT_TOO_LOW_FOR_RESERVATION error
message by naming the specific configuration that must be adjusted so
that the query can pass Admission Control. New fields
'per_backend_mem_to_admit_source' and
'coord_backend_mem_to_admit_source' of type MemLimitSourcePB are added
to QuerySchedulePB. These fields record which limiting factor drives
the final numbers in 'per_backend_mem_to_admit' and
'coord_backend_mem_to_admit' respectively. In turn, Admission Control
uses this information to compose a more informative error message that
the user can act upon. The new error message pattern also explicitly
mentions "Per Host Min Memory Reservation" as the place to look when
investigating the memory reservations scheduled for each backend node.
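
The idea is to translate the limiting factor into actionable advice,
roughly like this (a sketch; the enum values and messages are
illustrative, not the actual MemLimitSourcePB definitions):

{code:java}
// Sketch: map the factor that determined mem-to-admit to the config
// the user should adjust.
class AdmissionAdviceSketch {
  enum MemLimitSource {
    MEM_LIMIT_QUERY_OPTION, POOL_MAX_QUERY_MEM_LIMIT, ESTIMATED_FROM_PLAN
  }

  static String adviceFor(MemLimitSource source) {
    switch (source) {
      case MEM_LIMIT_QUERY_OPTION:
        return "increase the MEM_LIMIT query option";
      case POOL_MAX_QUERY_MEM_LIMIT:
        return "increase max-query-mem-limit in the pool config";
      default:
        return "check 'Per Host Min Memory Reservation' in the query profile";
    }
  }

  public static void main(String[] args) {
    System.out.println(adviceFor(MemLimitSource.POOL_MAX_QUERY_MEM_LIMIT));
  }
}
{code}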

Updated documentation with examples of query rejection by Admission
Control and how to read the error message.

Testing:
- Add BE tests at admission-controller-test.cc
- Adjust and pass affected EE tests

Change-Id: I1ef7fb7e7a194b2036c2948639a06c392590bf66
Reviewed-on: http://gerrit.cloudera.org:8080/21436
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Clarify REASON_MEM_LIMIT_TOO_LOW_FOR_RESERVATION error message
> --
>
> Key: IMPALA-13083
> URL: https://issues.apache.org/jira/browse/IMPALA-13083
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Distributed Exec
>Reporter: Riza Suminto
>Assignee: Riza Suminto
>Priority: Major
>
> The REASON_MEM_LIMIT_TOO_LOW_FOR_RESERVATION error message is too vague for 
> a user or administrator to make the adjustments needed to run a query that is 
> rejected by the admission controller.
> {code:java}
> const string REASON_MEM_LIMIT_TOO_LOW_FOR_RESERVATION =
>     "minimum memory reservation is greater than memory available to the "
>     "query for buffer reservations. Memory reservation needed given the "
>     "current plan: $0. Adjust either the mem_limit or the pool config "
>     "(max-query-mem-limit, min-query-mem-limit) for the query to allow the "
>     "query memory limit to be at least $1. Note that changing the mem_limit "
>     "may also change the plan. See the query profile for more information "
>     "about the per-node memory requirements.";
> {code}
> There are many configs and options that directly or indirectly clamp 
> schedule.per_backend_mem_limit() and schedule.per_backend_mem_to_admit().
> [https://github.com/apache/impala/blob/3b35ddc8ca7b0e540fc16c413a170a25e164462b/be/src/scheduling/schedule-state.cc#L262-L361]
> Ideally, this error message should clearly mention which query option / llama 
> config / backend flag influences the per_backend_mem_limit decision, so that 
> the user can directly adjust that config. It should also explicitly mention 
> the 'Per Host Min Memory Reservation' info string in the query profile 
> instead of just 'per-node memory requirements'.






[jira] [Commented] (IMPALA-13040) SIGSEGV in QueryState::UpdateFilterFromRemote

2024-05-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17848461#comment-17848461
 ] 

ASF subversion and git services commented on IMPALA-13040:
--

Commit aa01079478773aed28c9a4d8b07c062202de698d in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=aa0107947 ]

IMPALA-13040: (addendum) Inject larger delay for sanitized build

TestLateQueryStateInit has been flaky in sanitized build because the
largest delay injection time is fixed at 3 seconds. This patch fixes
the issue by setting largest delay injection time equal to
RUNTIME_FILTER_WAIT_TIME_MS, which is 3 second for regular build and 10
seconds for sanitized build.

Testing:
- Loop and pass test_runtime_filter_aggregation.py 10 times in ASAN
  build and 50 times in UBSAN build.

Change-Id: I09e5ae4646f53632e9a9f519d370a33a5534df19
Reviewed-on: http://gerrit.cloudera.org:8080/21439
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> SIGSEGV in QueryState::UpdateFilterFromRemote
> --
>
> Key: IMPALA-13040
> URL: https://issues.apache.org/jira/browse/IMPALA-13040
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Reporter: Csaba Ringhofer
>Assignee: Riza Suminto
>Priority: Critical
> Fix For: Impala 4.5.0
>
>
> {code}
> Crash reason:  SIGSEGV /SEGV_MAPERR
> Crash address: 0x48
> Process uptime: not available
> Thread 114 (crashed)
>  0  libpthread.so.0 + 0x9d00
> rax = 0x00019e57ad00   rdx = 0x2a656720
> rcx = 0x059a9860   rbx = 0x
> rsi = 0x00019e57ad00   rdi = 0x0038
> rbp = 0x7f6233d544e0   rsp = 0x7f6233d544a8
>  r8 = 0x06a53540r9 = 0x0039
> r10 = 0x   r11 = 0x000a
> r12 = 0x00019e57ad00   r13 = 0x7f62a2f997d0
> r14 = 0x7f6233d544f8   r15 = 0x1632c0f0
> rip = 0x7f62a2f96d00
> Found by: given as instruction pointer in context
>  1  
> impalad!impala::QueryState::UpdateFilterFromRemote(impala::UpdateFilterParamsPB
>  const&, kudu::rpc::RpcContext*) [query-state.cc : 1033 + 0x5]
> rbp = 0x7f6233d54520   rsp = 0x7f6233d544f0
> rip = 0x015c0837
> Found by: previous frame's frame pointer
>  2  
> impalad!impala::DataStreamService::UpdateFilterFromRemote(impala::UpdateFilterParamsPB
>  const*, impala::UpdateFilterResultPB*, kudu::rpc::RpcContext*) 
> [data-stream-service.cc : 134 + 0xb]
> rbp = 0x7f6233d54640   rsp = 0x7f6233d54530
> rip = 0x017c05de
> Found by: previous frame's frame pointer
> {code}
> The line that crashes is 
> https://github.com/apache/impala/blob/b39cd79ae84c415e0aebec2c2b4d7690d2a0cc7a/be/src/runtime/query-state.cc#L1033
> My guess is that the actual segfault is within WaitForPrepare(), but it 
> was inlined. Not sure if a remote filter can arrive even before 
> QueryState::Init() has finished - that would explain the issue, as 
> instances_prepared_barrier_ is not yet created at that point.






[jira] [Commented] (IMPALA-12800) Queries with many nested inline views see performance issues with ExprSubstitutionMap

2024-05-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17848462#comment-17848462
 ] 

ASF subversion and git services commented on IMPALA-12800:
--

Commit ae6846b1cd039b2cd6f8753ce3ff810c5b2d3ce3 in impala's branch 
refs/heads/master from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=ae6846b1c ]

IMPALA-12800: Skip O(n^2) ExprSubstitutionMap::verify() for release builds

ExprSubstitutionMap::compose() and combine() call verify() to
check the new ExprSubstitutionMap for duplicates. This algorithm
is O(n^2) and can add significant overhead to SQLs with a large
number of expressions or inline views. This changes verify() to
skip the check for release builds (keeping it for debug builds).
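
The shape of the change (a sketch; the debug/release check shown is
illustrative, not necessarily the mechanism the patch uses):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Sketch: an O(n^2) duplicate check that runs only in debug builds.
class ExprSubstitutionMapSketch {
  // Assumption: assertion status stands in for a debug-build flag.
  static final boolean DEBUG_BUILD =
      ExprSubstitutionMapSketch.class.desiredAssertionStatus();

  final List<String> lhs_ = new ArrayList<>();

  void verify() {
    if (!DEBUG_BUILD) return;  // skip the O(n^2) scan in release builds
    for (int i = 0; i < lhs_.size(); ++i) {
      for (int j = i + 1; j < lhs_.size(); ++j) {
        if (lhs_.get(i).equals(lhs_.get(j))) {
          throw new IllegalStateException("duplicate lhs: " + lhs_.get(i));
        }
      }
    }
  }
}
{code}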

In a query with 20+ layers of inline views and thousands of
expressions, turning off the verify() call cuts the execution
time from 51 minutes to 18 minutes.

This doesn't fully solve slowness in ExprSubstitutionMap.
Further improvement would require Expr to support hash-based
algorithms, which is a much larger change.

Testing:
 - Manual performance comparison with/without the verify() call

Change-Id: Ieeacfec6a5b487076ce5b19747319630616411f0
Reviewed-on: http://gerrit.cloudera.org:8080/21444
Reviewed-by: Joe McDonnell 
Tested-by: Impala Public Jenkins 


> Queries with many nested inline views see performance issues with 
> ExprSubstitutionMap
> -
>
> Key: IMPALA-12800
> URL: https://issues.apache.org/jira/browse/IMPALA-12800
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 4.3.0
>Reporter: Joe McDonnell
>Priority: Critical
> Attachments: impala12800repro.sql, impala12800schema.sql, 
> long_query_jstacks.tar.gz
>
>
> A user running a query with many layers of inline views saw a large amount of 
> time spent in analysis. 
>  
> {noformat}
> - Authorization finished (ranger): 7s518ms (13.134ms)
> - Value transfer graph computed: 7s760ms (241.953ms)
> - Single node plan created: 2m47s (2m39s)
> - Distributed plan created: 2m47s (7.430ms)
> - Lineage info computed: 2m47s (39.017ms)
> - Planning finished: 2m47s (672.518ms){noformat}
> In reproducing it locally, we found that most of the stacks end up in 
> ExprSubstitutionMap.
>  
> Here are the main stacks seen while running jstack every 3 seconds during a 
> 75 second execution:
> Location 1: (ExprSubstitutionMap::compose -> contains -> indexOf -> Expr 
> equals) (4 samples)
> {noformat}
>    java.lang.Thread.State: RUNNABLE
>     at org.apache.impala.analysis.Expr.equals(Expr.java:1008)
>     at java.util.ArrayList.indexOf(ArrayList.java:323)
>     at java.util.ArrayList.contains(ArrayList.java:306)
>     at 
> org.apache.impala.analysis.ExprSubstitutionMap.compose(ExprSubstitutionMap.java:120){noformat}
> Location 2:  (ExprSubstitutionMap::compose -> verify -> Expr equals) (9 
> samples)
> {noformat}
>    java.lang.Thread.State: RUNNABLE
>     at org.apache.impala.analysis.Expr.equals(Expr.java:1008)
>     at 
> org.apache.impala.analysis.ExprSubstitutionMap.verify(ExprSubstitutionMap.java:173)
>     at 
> org.apache.impala.analysis.ExprSubstitutionMap.compose(ExprSubstitutionMap.java:126){noformat}
> Location 3: (ExprSubstitutionMap::combine -> verify -> Expr equals) (5 
> samples)
> {noformat}
>    java.lang.Thread.State: RUNNABLE
>     at org.apache.impala.analysis.Expr.equals(Expr.java:1008)
>     at 
> org.apache.impala.analysis.ExprSubstitutionMap.verify(ExprSubstitutionMap.java:173)
>     at 
> org.apache.impala.analysis.ExprSubstitutionMap.combine(ExprSubstitutionMap.java:143){noformat}
> Location 4:  (TupleIsNullPredicate.wrapExprs ->  Analyzer.isTrueWithNullSlots 
> -> FeSupport.EvalPredicate -> Thrift serialization) (4 samples)
> {noformat}
>    java.lang.Thread.State: RUNNABLE
>     at java.lang.StringCoding.encode(StringCoding.java:364)
>     at java.lang.String.getBytes(String.java:941)
>     at 
> org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:227)
>     at 
> org.apache.impala.thrift.TClientRequest$TClientRequestStandardScheme.write(TClientRequest.java:532)
>     at 
> org.apache.impala.thrift.TClientRequest$TClientRequestStandardScheme.write(TClientRequest.java:467)
>     at org.apache.impala.thrift.TClientRequest.write(TClientRequest.java:394)
>     at 
> org.apache.impala.thrift.TQueryCtx$TQueryCtxStandardScheme.write(TQueryCtx.java:3034)
>     at 
> org.apache.impala.thrift.TQueryCtx$TQueryCtxStandardScheme.write(TQueryCtx.java:2709)
>     at org.apache.impala.thrift.TQueryCtx.write(TQueryCtx.java:2400)
>     at org.apache.thrift.TSerializer.serialize(TSerializer.java:84)
>     at 
> 

[jira] [Commented] (IMPALA-13079) Add support for FLOAT/DOUBLE in Iceberg metadata tables

2024-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17848164#comment-17848164
 ] 

ASF subversion and git services commented on IMPALA-13079:
--

Commit e5fdcb4f4b7e2f37e5f7bb357eede8092de8f429 in impala's branch 
refs/heads/master from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=e5fdcb4f4 ]

IMPALA-13091: query_test.test_iceberg.TestIcebergV2Table.test_metadata_tables 
fails on an expected constant

IMPALA-13079 added a test in iceberg-metadata-tables.test that included
assertions about values that can change across builds, e.g. file sizes,
which caused test failures.

This commit fixes it by doing two things:
1. narrowing down the result set of the query to the column that the
   test is really about - this removes some of the problematic values
2. using regexes for the remaining problematic values.

Change-Id: Ic056079eed87a68afa95cd111ce2037314cd9620
Reviewed-on: http://gerrit.cloudera.org:8080/21440
Tested-by: Impala Public Jenkins 
Reviewed-by: Riza Suminto 


> Add support for FLOAT/DOUBLE in Iceberg metadata tables
> ---
>
> Key: IMPALA-13079
> URL: https://issues.apache.org/jira/browse/IMPALA-13079
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Daniel Becker
>Assignee: Daniel Becker
>Priority: Major
>  Labels: impala-iceberg
>







[jira] [Commented] (IMPALA-13091) query_test.test_iceberg.TestIcebergV2Table.test_metadata_tables fails on an expected constant

2024-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17848163#comment-17848163
 ] 

ASF subversion and git services commented on IMPALA-13091:
--

Commit e5fdcb4f4b7e2f37e5f7bb357eede8092de8f429 in impala's branch 
refs/heads/master from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=e5fdcb4f4 ]

IMPALA-13091: query_test.test_iceberg.TestIcebergV2Table.test_metadata_tables 
fails on an expected constant

IMPALA-13079 added a test in iceberg-metadata-tables.test that included
assertions about values that can change across builds, e.g. file sizes,
which caused test failures.

This commit fixes it by doing two things:
1. narrowing down the result set of the query to the column that the
   test is really about - this removes some of the problematic values
2. using regexes for the remaining problematic values.

Change-Id: Ic056079eed87a68afa95cd111ce2037314cd9620
Reviewed-on: http://gerrit.cloudera.org:8080/21440
Tested-by: Impala Public Jenkins 
Reviewed-by: Riza Suminto 


> query_test.test_iceberg.TestIcebergV2Table.test_metadata_tables fails on an 
> expected constant
> -
>
> Key: IMPALA-13091
> URL: https://issues.apache.org/jira/browse/IMPALA-13091
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 4.5.0
>Reporter: Laszlo Gaal
>Assignee: Daniel Becker
>Priority: Critical
>  Labels: impala-iceberg
>
> This fails in various sanitizer builds (ASAN, UBSAN):
> Failure report:{code}
> query_test/test_iceberg.py:1527: in test_metadata_tables
> '$OVERWRITE_SNAPSHOT_TS': str(overwrite_snapshot_ts.data[0])})
> common/impala_test_suite.py:820: in run_test_case
> self.__verify_results_and_errors(vector, test_section, result, use_db)
> common/impala_test_suite.py:627: in __verify_results_and_errors
> replace_filenames_with_placeholder)
> common/test_result_verifier.py:520: in verify_raw_results
> VERIFIER_MAP[verifier](expected, actual)
> common/test_result_verifier.py:313: in verify_query_result_is_equal
> assert expected_results == actual_results
> E   assert Comparing QueryTestResults (expected vs actual):
> E 
> 0,regex:'.*\.parquet','PARQUET',0,3,3648,'{1:32,2:63,3:71,4:43,5:55,6:47,7:39,8:58,9:47,13:63,14:96,15:75,16:78}','{1:3,2:3,3:3,4:3,5:3,6:3,7:3,8:3,9:3,13:3,14:6,15:6,16:6}','{1:1,2:0,3:0,4:0,5:0,6:1,7:1,8:1,9:1,13:0,14:0,15:0,16:0}','{16:0,4:1,5:1,14:0}','{1:"AA==",2:"AQ==",3:"9v8=",4:"/+ZbLw==",5:"MAWO5C7/O6s=",6:"AFgLImsYBgA=",7:"kU0AAA==",8:"QSBzdHJpbmc=",9:"YmluMQ==",13:"av///w==",14:"fcOUJa1JwtQ=",16:"Pw=="}','{1:"AQ==",2:"BQ==",3:"lgA=",4:"qV/jWA==",5:"fcOUJa1JwlQ=",6:"AMhZw6A3BgA=",7:"Hk8AAA==",8:"U29tZSBzdHJpbmc=",9:"YmluMg==",13:"Cg==",14:"NEA=",16:"AAB6RA=="}','NULL','[4]','NULL',0,'{"arr.element":{"column_size":96,"value_count":6,"null_value_count":0,"nan_value_count":0,"lower_bound":-2e+100,"upper_bound":20},"b":{"column_size":32,"value_count":3,"null_value_count":1,"nan_value_count":null,"lower_bound":false,"upper_bound":true},"bn":{"column_size":47,"value_count":3,"null_value_count":1,"nan_value_count":null,"lower_bound":"YmluMQ==","upper_bound":"YmluMg=="},"d":{"column_size":55,"value_count":3,"null_value_count":0,"nan_value_count":1,"lower_bound":-2e-100,"upper_bound":2e+100},"dt":{"column_size":39,"value_count":3,"null_value_count":1,"nan_value_count":null,"lower_bound":"2024-05-14","upper_bound":"2025-06-15"},"f":{"column_size":43,"value_count":3,"null_value_count":0,"nan_value_count":1,"lower_bound":2.00026702864e-10,"upper_bound":199973982208},"i":{"column_size":63,"value_count":3,"null_value_count":0,"nan_value_count":null,"lower_bound":1,"upper_bound":5},"l":{"column_size":71,"value_count":3,"null_value_count":0,"nan_value_count":null,"lower_bound":-10,"upper_bound":150},"mp.key":{"column_size":75,"value_count":6,"null_value_count":0,"nan_value_count":null,"lower_bound":null,"upper_bound":null},"mp.value":{"column_size":78,"value_count":6,"null_value_count":0,"nan_value_count":0,"lower_bound":0.5,"upper_bound":1000},"s":{"column_size":58,"value_count":3,"null_value_count":1,"nan_value_count":null,"lower_bound":"A
>  string","upper_bound":"Some 
> string"},"strct.i":{"column_size":63,"value_count":3,"null_value_count":0,"nan_value_count":null,"lower_bound":-150,"upper_bound":10},"ts":{"column_size":47,"value_count":3,"null_value_count":1,"nan_value_count":null,"lower_bound":"2024-05-14
>  14:51:12","upper_bound":"2025-06-15 18:51:12"}}' != 
> 

[jira] [Commented] (IMPALA-12362) Improve Linux packaging support.

2024-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17848162#comment-17848162
 ] 

ASF subversion and git services commented on IMPALA-12362:
--

Commit a5e5aa16d887faedee4eea1bc809fba41d758f5b in impala's branch 
refs/heads/branch-3.4.2 from Xiang Yang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=a5e5aa16d ]

IMPALA-12362: (part-4/4) Refactor linux packaging related cmake files.

Move the Linux packaging related content into package/CMakeLists.txt
to make it clearer.

This patch also adds LICENSE and NOTICE files to the final package.

Testing:
 - Manually deployed the package on Ubuntu 22.04 and verified it.

Backport note for 3.4.x:
 - Resolved conflicts in CMakeLists.txt and modified
   package/CMakeLists.txt accordingly.

Change-Id: If3914dcda69f81a735cdf70d76c59fa09454777b
Reviewed-on: http://gerrit.cloudera.org:8080/20263
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
Reviewed-on: http://gerrit.cloudera.org:8080/21410
Reviewed-by: Xiang Yang 
Reviewed-by: Zihao Ye 
Tested-by: Quanlong Huang 


> Improve Linux packaging support.
> 
>
> Key: IMPALA-12362
> URL: https://issues.apache.org/jira/browse/IMPALA-12362
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: XiangYang
>Assignee: XiangYang
>Priority: Major
> Fix For: Impala 4.4.0
>
>
> including:
> (part-1/4) Refactor service management scripts.
> (part-2/4) Optimize default configurations for packaging module.
> (part-3/4) Add admissiond service and impala-profile-tool to packaging module.
> (part-4/4) Refactor linux packaging related cmake files, add LICENSE and 
> NOTICE files.






[jira] [Commented] (IMPALA-13020) catalog-topic updates >2GB do not work due to Thrift's max message size

2024-05-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847651#comment-17847651
 ] 

ASF subversion and git services commented on IMPALA-13020:
--

Commit c8415513158842e2ddb1d64891298d76fb0b367f in impala's branch 
refs/heads/branch-4.4.0 from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=c84155131 ]

IMPALA-13020 (part 1): Change thrift_rpc_max_message_size to int64_t

Thrift 0.16.0 introduced a max message size to protect
receivers against a malicious message allocating large
amounts of memory. That limit is a 32-bit signed integer,
so the max value is 2GB. Impala introduced the
thrift_rpc_max_message_size startup option to set that
for Impala's thrift servers.

There are times when Impala wants to send a message that
is larger than 2GB. In particular, the catalog-update
topic for the statestore can exceed 2GBs when there is
a lot of metadata loaded using the old v1 catalog. When
there is a 2GB max message size, the statestore can create
and send a >2GB message, but the impalads will reject
it. This can lead to impalads having stale metadata.

This switches to a patched Thrift that uses an int64_t
for the max message size for C++ code. It does not modify
the limit.

The MaxMessageSize error was being swallowed in TAcceptQueueServer.cpp,
so this fixes that location to always print MaxMessageSize
exceptions.

This is only patching the Thrift C++ library. It does not
patch the Thrift Java library. There are a few reasons for
that:
 - This specific issue involves C++ to C++ communication and
   will be solved by patching the C++ library.
 - C++ is easy to patch as it is built via the native-toolchain.
   There is no corresponding mechanism for patching our Java
   dependencies (though one could be developed).
 - Java modifications have implications for other dependencies
   like Hive which use Thrift to communicate with HMS.
For the Java code that uses the max message size, this converts the
64-bit value to a 32-bit value by capping it at Integer.MAX_VALUE.
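
The Java-side conversion amounts to the following (a minimal sketch of
the capping described above; the class and method names are
illustrative):

{code:java}
// Sketch: fit the 64-bit max-message-size flag into Thrift's Java
// int-based limit by capping at Integer.MAX_VALUE.
class MaxMessageSizeSketch {
  static int capToInt(long maxMessageSize) {
    return (int) Math.min(maxMessageSize, (long) Integer.MAX_VALUE);
  }

  public static void main(String[] args) {
    System.out.println(capToInt(64L << 30));  // 64GB -> 2147483647
  }
}
{code}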

Testing:
 - Added enough tables to produce a >2GB catalog-topic and
   restarted an impalad with a higher limit specified. Without
   the patch, the catalog-topic update would be rejected by the
   impalad. With the patch, it succeeds.

Change-Id: I681b1849cc565dcb25de8c070c18776ce69cbb87
Reviewed-on: http://gerrit.cloudera.org:8080/21367
Reviewed-by: Michael Smith 
Reviewed-by: Joe McDonnell 
Tested-by: Joe McDonnell 
(cherry picked from commit 13df8239d82a61afc3196295a7878ca2ffe91873)


> catalog-topic updates >2GB do not work due to Thrift's max message size
> ---
>
> Key: IMPALA-13020
> URL: https://issues.apache.org/jira/browse/IMPALA-13020
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.2.0, Impala 4.3.0
>Reporter: Joe McDonnell
>Assignee: Joe McDonnell
>Priority: Critical
> Fix For: Impala 4.5.0
>
>
> Thrift 0.16.0 added a max message size to protect against malicious packets 
> that can consume a large amount of memory on the receiver side. This max 
> message size is a signed 32-bit integer, so it maxes out at 2GB (which we set 
> via thrift_rpc_max_message_size).
> In catalog v1, the catalog-update statestore topic can become larger than 2GB 
> when there are a large number of tables / partitions / files. If this happens 
> and an Impala coordinator needs to start up (or needs a full topic update for 
> any other reason), it is expecting the statestore to send it the full topic 
> update, but the coordinator actually can't process the message. The 
> deserialization of the message hits the 2GB max message size limit and fails.
> On the statestore side, it shows this message:
> {noformat}
> I0418 16:54:51.727290 3844140 statestore.cc:507] Preparing initial 
> catalog-update topic update for 
> impa...@mcdonnellthrift.vpc.cloudera.com:27000. Size = 2.27 GB
> I0418 16:54:53.889446 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:53.889488 3844140 client-cache.cc:82] ReopenClient(): re-creating 
> client for mcdonnellthrift.vpc.cloudera.com:23000
> I0418 16:54:53.889493 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:53.889503 3844140 thrift-client.cc:116] Error closing connection 
> to: mcdonnellthrift.vpc.cloudera.com:23000, ignoring (write() send(): Broken 
> pipe)
> I0418 16:54:56.052882 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:56.052932 3844140 client-cache.h:363] RPC Error: Client for 
> mcdonnellthrift.vpc.cloudera.com:23000 hit an unexpected exception: write() 
> send(): Broken pipe, type: N6apache6thrift9transport19TTransportExceptionE, 
> 

[jira] [Commented] (IMPALA-13020) catalog-topic updates >2GB do not work due to Thrift's max message size

2024-05-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847652#comment-17847652
 ] 

ASF subversion and git services commented on IMPALA-13020:
--

Commit c9745fd5b941f52b3cd3496c425722fcbbffe894 in impala's branch 
refs/heads/branch-4.4.0 from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=c9745fd5b ]

IMPALA-13020 (part 2): Split out external vs internal Thrift max message size

The Thrift max message size is designed to protect against malicious
messages that consume a lot of memory on the receiver. This is an
important security measure for externally facing services, but it
can interfere with internal communication within the cluster.
Currently, the max message size is controlled by a single startup
flag for both. This creates tension between having a low value to
protect against malicious messages and having a high value to
avoid issues with internal communication (e.g. large statestore
updates).

This introduces a new flag thrift_external_rpc_max_message_size to
specify the limit for externally-facing services. The current
thrift_rpc_max_message_size now applies only for internal services.
Splitting them apart allows setting a much higher value for
internal services (64GB) while leaving the externally facing services
using the current 2GB limit.

This modifies various code locations that wrap a Thrift transport to
pass in the original transport's TConfiguration. This also adds DCHECKs
to make sure that the new transport inherits the max message size. This
limits the locations where we actually need to set max message size.

ThriftServer/ThriftServerBuilder have a setting "is_external_facing"
which can be specified on each ThriftServer. This modifies statestore
and catalog to set is_external_facing to false. All other servers stay
with the default of true.
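
Conceptually (a sketch in Java; Impala's ThriftServer/
ThriftServerBuilder are C++, and the names here are illustrative, with
the 2GB/64GB values taken from the description above):

{code:java}
// Sketch: choose the max message size from the server's exposure.
class ThriftLimitSketch {
  static long maxMessageSizeFor(boolean isExternalFacing) {
    long external = 2L << 30;   // 2GB: conservative, for external clients
    long internal = 64L << 30;  // 64GB: for in-cluster RPCs such as
                                // large statestore updates
    return isExternalFacing ? external : internal;
  }
}
{code}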

Testing:
 - This adds a test case to verify that is_external_facing uses the
   higher limit.
 - Ran through the steps in testdata/scale_test_metadata/README.md
   and updated the value in that doc.
 - Created many tables to push the catalog-update topic to be >2GB
   and verified that statestore successfully sends it when an impalad
   restarts.

Change-Id: Ib9a649ef49a8a99c7bd9a1b73c37c4c621661311
Reviewed-on: http://gerrit.cloudera.org:8080/21420
Tested-by: Impala Public Jenkins 
Reviewed-by: Riza Suminto 
Reviewed-by: Michael Smith 
(cherry picked from commit bcff4df6194b2f192d937bb9c031721feccb69df)


> catalog-topic updates >2GB do not work due to Thrift's max message size
> ---
>
> Key: IMPALA-13020
> URL: https://issues.apache.org/jira/browse/IMPALA-13020
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.2.0, Impala 4.3.0
>Reporter: Joe McDonnell
>Assignee: Joe McDonnell
>Priority: Critical
> Fix For: Impala 4.5.0
>
>
> Thrift 0.16.0 added a max message size to protect against malicious packets 
> that can consume a large amount of memory on the receiver side. This max 
> message size is a signed 32-bit integer, so it maxes out at 2GB (which we set 
> via thrift_rpc_max_message_size).
> In catalog v1, the catalog-update statestore topic can become larger than 2GB 
> when there are a large number of tables / partitions / files. If this happens 
> and an Impala coordinator needs to start up (or needs a full topic update for 
> any other reason), it is expecting the statestore to send it the full topic 
> update, but the coordinator actually can't process the message. The 
> deserialization of the message hits the 2GB max message size limit and fails.
> On the statestore side, it shows this message:
> {noformat}
> I0418 16:54:51.727290 3844140 statestore.cc:507] Preparing initial 
> catalog-update topic update for 
> impa...@mcdonnellthrift.vpc.cloudera.com:27000. Size = 2.27 GB
> I0418 16:54:53.889446 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:53.889488 3844140 client-cache.cc:82] ReopenClient(): re-creating 
> client for mcdonnellthrift.vpc.cloudera.com:23000
> I0418 16:54:53.889493 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:53.889503 3844140 thrift-client.cc:116] Error closing connection 
> to: mcdonnellthrift.vpc.cloudera.com:23000, ignoring (write() send(): Broken 
> pipe)
> I0418 16:54:56.052882 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:56.052932 3844140 client-cache.h:363] RPC Error: Client for 
> mcdonnellthrift.vpc.cloudera.com:23000 hit an unexpected exception: write() 
> send(): Broken pipe, type: N6apache6thrift9transport19TTransportExceptionE, 
> rpc: N6impala20TUpdateStateResponseE, send: not done
> I0418 16:54:56.052937 

[jira] [Commented] (IMPALA-13020) catalog-topic updates >2GB do not work due to Thrift's max message size

2024-05-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847419#comment-17847419
 ] 

ASF subversion and git services commented on IMPALA-13020:
--

Commit 13df8239d82a61afc3196295a7878ca2ffe91873 in impala's branch 
refs/heads/master from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=13df8239d ]

IMPALA-13020 (part 1): Change thrift_rpc_max_message_size to int64_t

Thrift 0.16.0 introduced a max message size to protect
receivers against a malicious message allocating large
amounts of memory. That limit is a 32-bit signed integer,
so the max value is 2GB. Impala introduced the
thrift_rpc_max_message_size startup option to set that
for Impala's thrift servers.

There are times when Impala wants to send a message that
is larger than 2GB. In particular, the catalog-update
topic for the statestore can exceed 2GBs when there is
a lot of metadata loaded using the old v1 catalog. When
there is a 2GB max message size, the statestore can create
and send a >2GB message, but the impalads will reject
it. This can lead to impalads having stale metadata.

This switches to a patched Thrift that uses an int64_t
for the max message size for C++ code. It does not modify
the limit.

The MaxMessageSize error was being swallowed in TAcceptQueueServer.cpp,
so this fixes that location to always print MaxMessageSize
exceptions.

This is only patching the Thrift C++ library. It does not
patch the Thrift Java library. There are a few reasons for
that:
 - This specific issue involves C++ to C++ communication and
   will be solved by patching the C++ library.
 - C++ is easy to patch as it is built via the native-toolchain.
   There is no corresponding mechanism for patching our Java
   dependencies (though one could be developed).
 - Java modifications have implications for other dependencies
   like Hive which use Thrift to communicate with HMS.
For the Java code that uses the max message size, this converts the
64-bit value to a 32-bit value by capping it at Integer.MAX_VALUE.

Testing:
 - Added enough tables to produce a >2GB catalog-topic and
   restarted an impalad with a higher limit specified. Without
   the patch, the catalog-topic update would be rejected by the
   impalad. With the patch, it succeeds.

Change-Id: I681b1849cc565dcb25de8c070c18776ce69cbb87
Reviewed-on: http://gerrit.cloudera.org:8080/21367
Reviewed-by: Michael Smith 
Reviewed-by: Joe McDonnell 
Tested-by: Joe McDonnell 


> catalog-topic updates >2GB do not work due to Thrift's max message size
> ---
>
> Key: IMPALA-13020
> URL: https://issues.apache.org/jira/browse/IMPALA-13020
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.2.0, Impala 4.3.0
>Reporter: Joe McDonnell
>Priority: Critical
>
> Thrift 0.16.0 added a max message size to protect against malicious packets 
> that can consume a large amount of memory on the receiver side. This max 
> message size is a signed 32-bit integer, so it maxes out at 2GB (which we set 
> via thrift_rpc_max_message_size).
> In catalog v1, the catalog-update statestore topic can become larger than 2GB 
> when there are a large number of tables / partitions / files. If this happens 
> and an Impala coordinator needs to start up (or needs a full topic update for 
> any other reason), it is expecting the statestore to send it the full topic 
> update, but the coordinator actually can't process the message. The 
> deserialization of the message hits the 2GB max message size limit and fails.
> On the statestore side, it shows this message:
> {noformat}
> I0418 16:54:51.727290 3844140 statestore.cc:507] Preparing initial 
> catalog-update topic update for 
> impa...@mcdonnellthrift.vpc.cloudera.com:27000. Size = 2.27 GB
> I0418 16:54:53.889446 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:53.889488 3844140 client-cache.cc:82] ReopenClient(): re-creating 
> client for mcdonnellthrift.vpc.cloudera.com:23000
> I0418 16:54:53.889493 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:53.889503 3844140 thrift-client.cc:116] Error closing connection 
> to: mcdonnellthrift.vpc.cloudera.com:23000, ignoring (write() send(): Broken 
> pipe)
> I0418 16:54:56.052882 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:56.052932 3844140 client-cache.h:363] RPC Error: Client for 
> mcdonnellthrift.vpc.cloudera.com:23000 hit an unexpected exception: write() 
> send(): Broken pipe, type: N6apache6thrift9transport19TTransportExceptionE, 
> rpc: N6impala20TUpdateStateResponseE, send: not done
> I0418 16:54:56.052937 3844140 client-cache.cc:174] Broken Connection, destroy 
> client for 

[jira] [Commented] (IMPALA-13020) catalog-topic updates >2GB do not work due to Thrift's max message size

2024-05-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847420#comment-17847420
 ] 

ASF subversion and git services commented on IMPALA-13020:
--

Commit bcff4df6194b2f192d937bb9c031721feccb69df in impala's branch 
refs/heads/master from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=bcff4df61 ]

IMPALA-13020 (part 2): Split out external vs internal Thrift max message size

The Thrift max message size is designed to protect against malicious
messages that consume a lot of memory on the receiver. This is an
important security measure for externally facing services, but it
can interfere with internal communication within the cluster.
Currently, the max message size is controlled by a single startup
flag for both. This creates tension between having a low value to
protect against malicious messages and having a high value to
avoid issues with internal communication (e.g. large statestore
updates).

This introduces a new flag thrift_external_rpc_max_message_size to
specify the limit for externally-facing services. The current
thrift_rpc_max_message_size now applies only for internal services.
Splitting them apart allows setting a much higher value for
internal services (64GB) while leaving the externally facing services
using the current 2GB limit.

This modifies various code locations that wrap a Thrift transport to
pass in the original transport's TConfiguration. This also adds DCHECKs
to make sure that the new transport inherits the max message size. This
limits the locations where we actually need to set max message size.

ThriftServer/ThriftServerBuilder have a setting "is_external_facing"
which can be specified on each ThriftServer. This modifies statestore
and catalog to set is_external_facing to false. All other servers stay
with the default of true.

Testing:
 - This adds a test case to verify that is_external_facing uses the
   higher limit.
 - Ran through the steps in testdata/scale_test_metadata/README.md
   and updated the value in that doc.
 - Created many tables to push the catalog-update topic to be >2GB
   and verified that statestore successfully sends it when an impalad
   restarts.

Change-Id: Ib9a649ef49a8a99c7bd9a1b73c37c4c621661311
Reviewed-on: http://gerrit.cloudera.org:8080/21420
Tested-by: Impala Public Jenkins 
Reviewed-by: Riza Suminto 
Reviewed-by: Michael Smith 


> catalog-topic updates >2GB do not work due to Thrift's max message size
> ---
>
> Key: IMPALA-13020
> URL: https://issues.apache.org/jira/browse/IMPALA-13020
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.2.0, Impala 4.3.0
>Reporter: Joe McDonnell
>Priority: Critical
>
> Thrift 0.16.0 added a max message size to protect against malicious packets 
> that can consume a large amount of memory on the receiver side. This max 
> message size is a signed 32-bit integer, so it maxes out at 2GB (which we set 
> via thrift_rpc_max_message_size).
> In catalog v1, the catalog-update statestore topic can become larger than 2GB 
> when there are a large number of tables / partitions / files. If this happens 
> and an Impala coordinator needs to start up (or needs a full topic update for 
> any other reason), it is expecting the statestore to send it the full topic 
> update, but the coordinator actually can't process the message. The 
> deserialization of the message hits the 2GB max message size limit and fails.
> On the statestore side, it shows this message:
> {noformat}
> I0418 16:54:51.727290 3844140 statestore.cc:507] Preparing initial 
> catalog-update topic update for 
> impa...@mcdonnellthrift.vpc.cloudera.com:27000. Size = 2.27 GB
> I0418 16:54:53.889446 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:53.889488 3844140 client-cache.cc:82] ReopenClient(): re-creating 
> client for mcdonnellthrift.vpc.cloudera.com:23000
> I0418 16:54:53.889493 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:53.889503 3844140 thrift-client.cc:116] Error closing connection 
> to: mcdonnellthrift.vpc.cloudera.com:23000, ignoring (write() send(): Broken 
> pipe)
> I0418 16:54:56.052882 3844140 thrift-util.cc:198] TSocket::write_partial() 
> send() : Broken pipe
> I0418 16:54:56.052932 3844140 client-cache.h:363] RPC Error: Client for 
> mcdonnellthrift.vpc.cloudera.com:23000 hit an unexpected exception: write() 
> send(): Broken pipe, type: N6apache6thrift9transport19TTransportExceptionE, 
> rpc: N6impala20TUpdateStateResponseE, send: not done
> I0418 16:54:56.052937 3844140 client-cache.cc:174] Broken Connection, destroy 
> client for mcdonnellthrift.vpc.cloudera.com:23000{noformat}
> On the Impala side, it doesn't 

[jira] [Commented] (IMPALA-13055) Some Iceberg metadata table tests don't assert

2024-05-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17847380#comment-17847380
 ] 

ASF subversion and git services commented on IMPALA-13055:
--

Commit 3a8eb999cbc746c055708425e071c30e3c00422e in impala's branch 
refs/heads/master from Gabor Kaszab
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=3a8eb999c ]

IMPALA-13055: Some Iceberg metadata table tests don't assert

Some tests in the Iceberg metadata table suite use the following regex
to verify numbers in the output: [1-9]\d*|0
However, if this format is given, the test unconditionally passes.

This patch changes this format to \d+ and fixes the test results that
incorrectly passed before due to the test not asserting.

Opened IMPALA-13067 to investigate why the test framework works like
this for |0 in the regexes.
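
For illustration, a standalone Python sketch of the precedence pitfall
(the framework's matching is simplified here; "expected value:" is a
made-up surrounding pattern):

  import re

  # '|' has the lowest precedence in a regex: embedding [1-9]\d*|0 into
  # a larger pattern without parentheses splits the WHOLE pattern into
  # two alternatives, and the bare '0' branch matches almost anything.
  line = "totally unexpected output 0"

  broken  = re.search(r"expected value: [1-9]\d*|0", line)    # matches via bare '0'
  grouped = re.search(r"expected value: ([1-9]\d*|0)", line)  # no match
  simple  = re.search(r"expected value: \d+", line)           # no match

  assert broken and not grouped and not simple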

Change-Id: Ie47093f25a70253b3e6faca27d466d7cf6999fad
Reviewed-on: http://gerrit.cloudera.org:8080/21394
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Some Iceberg metadata table tests don't assert
> 
>
> Key: IMPALA-13055
> URL: https://issues.apache.org/jira/browse/IMPALA-13055
> Project: IMPALA
>  Issue Type: Test
>Reporter: Gabor Kaszab
>Priority: Major
>  Labels: impala-iceberg
>
> Some tests in the Iceberg metadata table suite use the following regex to 
> verify numbers in the output: [1-9]\d*|0
> However, if this format is given, the test unconditionally passes. One could 
> put the regex within parentheses, or simply verify with \d+



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13067) Some regexes make the tests unconditionally pass

2024-05-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17847381#comment-17847381
 ] 

ASF subversion and git services commented on IMPALA-13067:
--

Commit 3a8eb999cbc746c055708425e071c30e3c00422e in impala's branch 
refs/heads/master from Gabor Kaszab
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=3a8eb999c ]

IMPALA-13055: Some Iceberg metadata table tests don't assert

Some tests in the Iceberg metadata table suite use the following regex
to verify numbers in the output: [1-9]\d*|0
However, if this format is given, the test unconditionally passes.

This patch changes this format to \d+ and fixes the test results that
incorrectly passed before due to the test not asserting.

Opened IMPALA-13067 to investigate why the test framework works like
this for |0 in the regexes.

Change-Id: Ie47093f25a70253b3e6faca27d466d7cf6999fad
Reviewed-on: http://gerrit.cloudera.org:8080/21394
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Some regexes make the tests unconditionally pass
> --
>
> Key: IMPALA-13067
> URL: https://issues.apache.org/jira/browse/IMPALA-13067
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Reporter: Gabor Kaszab
>Priority: Major
>  Labels: test-framework
>
> This issue came out in the Iceberg metadata table tests, where this regex was 
> used:
> [1-9]\d*|0
>  
> The "|0" part confused the test framework: regardless of what you provide as 
> an expected result, the tests passed. One workaround is to put the regex 
> within parentheses; another is to simply use "\d+". 
> https://issues.apache.org/jira/browse/IMPALA-13055 applied the second 
> workaround on the tests.
> Some analysis would be great as to why the test framework behaves this way, 
> and if it is indeed a framework issue, we should fix it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12559) Support x5c Parameter in JSON Web Keys (JWK)

2024-05-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846970#comment-17846970
 ] 

ASF subversion and git services commented on IMPALA-12559:
--

Commit 7550eb607c2b92b1367dc5cf5667b681d59a8915 in impala's branch 
refs/heads/master from wzhou-code
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=7550eb607 ]

IMPALA-12559 (part 2): Fix build issue for different versions of OpenSSL

The previous patch called the OpenSSL API X509_get0_tbs_sigalg(), which
is not available in the OpenSSL version in the toolchain and caused
build failures.
This patch fixes the issue by calling X509_get_signature_nid() instead.

Testing:
 - Passed jwt-test unit tests and end-to-end tests.

Change-Id: I62b9f0c00f91c2b13be30c415e3f1ebd0e1bd2bc
Reviewed-on: http://gerrit.cloudera.org:8080/21432
Reviewed-by: gaurav singh 
Tested-by: Impala Public Jenkins 
Reviewed-by: Abhishek Rawat 


> Support x5c Parameter in JSON Web Keys (JWK)
> 
>
> Key: IMPALA-12559
> URL: https://issues.apache.org/jira/browse/IMPALA-12559
> Project: IMPALA
>  Issue Type: Bug
>  Components: be, Security
>Reporter: Jason Fehr
>Assignee: gaurav singh
>Priority: Critical
>  Labels: JWT, jwt, security
>
> The ["x5u"|https://datatracker.ietf.org/doc/html/rfc7517#section-4.6], 
> ["x5c"|https://datatracker.ietf.org/doc/html/rfc7517#section-4.7], 
> ["x5t"|https://datatracker.ietf.org/doc/html/rfc7517#section-4.8], and 
> ["x5t#S256|https://datatracker.ietf.org/doc/html/rfc7517#section-4.9] 
> parameters in JWKs is not supported by Impala.  Implement support for this 
> parameter using the available methods in the [Thalhammer/jwt-cpp 
> library|https://github.com/Thalhammer/jwt-cpp/blob/ce1f9df3a9f861d136d6f0c93a6f811c364d1d3d/example/jwks-verify.cpp].
> Note:  If the "alg" property is specified and so is "x5u" or "x5c", then the 
> value of the "alg" property must match the algorithm on the certificate from 
> the "x5u" or "x5c" property.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12559) Support x5c Parameter in JSON Web Keys (JWK)

2024-05-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846760#comment-17846760
 ] 

ASF subversion and git services commented on IMPALA-12559:
--

Commit 34c084cebb2f52a6ee11d3d93609b3e4e238816f in impala's branch 
refs/heads/master from gaurav1086
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=34c084ceb ]

IMPALA-12559: Support x5c Parameter for RSA JSON Web Keys

This enables JWT verification using the x5c
certificate(s) in RSA JWKS keys. The x5c claim can be
part of the JWKS either as a string or an array.
This patch only supports a single x5c certificate per
JWK.

If "x5c" is present and "alg" is not, then "alg" is
extracted from the "x5c" certificate using its signature
algorithm. However, if "x5c" is not present, then
"alg" is a mandatory field on the JWK.

Current mapping of signature algorithm string => algorithm:

sha256WithRSAEncryption => rs256
sha384WithRSAEncryption => rs384
sha512WithRSAEncryption => rs512

If "x5c" is present, then it is given priority over other
mandatory fields like "n", "e" to construct the public key.
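
A hedged sketch of the "alg"-from-certificate step, written with the
Python 'cryptography' package rather than Impala's C++ jwt-cpp path,
and assuming a single RSA certificate as this patch does:

  import base64
  from cryptography import x509

  # Maps the certificate's signature hash to a JWS algorithm name,
  # mirroring the sha*WithRSAEncryption => rs* table above.
  HASH_TO_JWS = {"sha256": "RS256", "sha384": "RS384", "sha512": "RS512"}

  def alg_from_x5c(x5c_entry: str) -> str:
      der = base64.b64decode(x5c_entry)           # x5c entries are base64 DER
      cert = x509.load_der_x509_certificate(der)
      return HASH_TO_JWS[cert.signature_hash_algorithm.name]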

Testing:
* added unit test VerifyJwtTokenWithx5cCertificate to
verify jwt with x5c certificate.
* added unit test VerifyJwtTokenWithx5cCertificateWithoutAlg
to verify jwt with x5c certificate without "alg".
* added e2e test testJwtAuthWithJwksX5cHttpUrl to verify
jwt with x5c certificate.

Change-Id: I70be6f9f54190544aa005b2644e2ed8db6f6bb74
Reviewed-on: http://gerrit.cloudera.org:8080/21382
Reviewed-by: Jason Fehr 
Reviewed-by: Wenzhe Zhou 
Tested-by: Impala Public Jenkins 


> Support x5c Parameter in JSON Web Keys (JWK)
> 
>
> Key: IMPALA-12559
> URL: https://issues.apache.org/jira/browse/IMPALA-12559
> Project: IMPALA
>  Issue Type: Bug
>  Components: be, Security
>Reporter: Jason Fehr
>Assignee: gaurav singh
>Priority: Critical
>  Labels: JWT, jwt, security
>
> The ["x5u"|https://datatracker.ietf.org/doc/html/rfc7517#section-4.6], 
> ["x5c"|https://datatracker.ietf.org/doc/html/rfc7517#section-4.7], 
> ["x5t"|https://datatracker.ietf.org/doc/html/rfc7517#section-4.8], and 
> ["x5t#S256|https://datatracker.ietf.org/doc/html/rfc7517#section-4.9] 
> parameters in JWKs is not supported by Impala.  Implement support for this 
> parameter using the available methods in the [Thalhammer/jwt-cpp 
> library|https://github.com/Thalhammer/jwt-cpp/blob/ce1f9df3a9f861d136d6f0c93a6f811c364d1d3d/example/jwks-verify.cpp].
> Note:  If the "alg" property is specified and so is "x5u" or "x5c", then the 
> value of the "alg" property must match the algorithm on the certificate from 
> the "x5u" or "x5c" property.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13079) Add support for FLOAT/DOUBLE in Iceberg metadata tables

2024-05-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846761#comment-17846761
 ] 

ASF subversion and git services commented on IMPALA-13079:
--

Commit bbfba13ed4d084681b542d7c5e1b5156576a603b in impala's branch 
refs/heads/master from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=bbfba13ed ]

IMPALA-13079: Add support for FLOAT/DOUBLE in Iceberg metadata tables

Until now, the float and double data types were not supported in Iceberg
metadata tables. This commit adds support for them.

Testing:
 - added a test table that contains all primitive types (except for
   decimal, which is still not supported), a struct, an array and a map
 - added a test query that queries the `files` metadata table of the
   above table - the 'readable_metrics' struct contains lower and upper
   bounds for all columns in the original table, with the original type

Change-Id: I2171c9aa9b6d2b634b8c511263b1610cb1d7cb29
Reviewed-on: http://gerrit.cloudera.org:8080/21425
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Add support for FLOAT/DOUBLE in Iceberg metadata tables
> ---
>
> Key: IMPALA-13079
> URL: https://issues.apache.org/jira/browse/IMPALA-13079
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Daniel Becker
>Assignee: Daniel Becker
>Priority: Major
>  Labels: impala-iceberg
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-9577) Use `system_unsync` time for Kudu test clusters

2024-05-14 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846448#comment-17846448
 ] 

ASF subversion and git services commented on IMPALA-9577:
-

Commit f507a02b60e905c51e80e6139eef00946cf6d453 in impala's branch 
refs/heads/branch-3.4.2 from Grant Henke
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=f507a02b6 ]

IMPALA-9577: [test] Use `system_unsync` time for Kudu test clusters

Recently Kudu made enhancements to time source configuration and
adjusted the time source for local clusters/tests to `system_unsync`.

This patch mirrors that behavior in Impala test clusters, given there is no
need to require an NTP-synchronized clock for a test where all the
participating Kudu masters and tablet servers run on the same node
using the same local wallclock.

See the Kudu commit here for details:
https://github.com/apache/kudu/commit/eb2b70d4b96be2fc2fdd6b3625acc284ac5774be

While making this change, I removed all NTP-related packages and special
handling, as they should no longer be needed in a development
environment. I also added curl and gawk, which were missing in my
Docker Ubuntu environment and broke my testing.

Testing:
I tested with the steps below using Docker for Mac:

  docker rm impala-dev
  docker volume rm impala
  docker run --privileged --interactive --tty --name impala-dev -v impala:/home \
    -p 25000:25000 -p 25010:25010 -p 25020:25020 ubuntu:16.04 /bin/bash

  apt-get update
  apt-get install sudo
  adduser --disabled-password --gecos '' impdev
  echo 'impdev ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
  su - impdev
  cd ~

  sudo apt-get --yes install git
  git clone https://git-wip-us.apache.org/repos/asf/impala.git ~/Impala
  cd ~/Impala
  export IMPALA_HOME=`pwd`
  git remote add fork https://github.com/granthenke/impala.git
  git fetch fork
  git checkout kudu-system-time

  $IMPALA_HOME/bin/bootstrap_development.sh

  source $IMPALA_HOME/bin/impala-config.sh
  (pushd fe && mvn -fae test -Dtest=AnalyzeDDLTest)
  (pushd fe && mvn -fae test -Dtest=AnalyzeKuduDDLTest)

  $IMPALA_HOME/bin/start-impala-cluster.py
  ./tests/run-tests.py query_test/test_kudu.py

Change-Id: Id99e5cb58ab988c3ad4f98484be8db193d5eaf99
Reviewed-on: http://gerrit.cloudera.org:8080/15568
Reviewed-by: Impala Public Jenkins 
Reviewed-by: Alexey Serbin 
Tested-by: Impala Public Jenkins 
Reviewed-on: http://gerrit.cloudera.org:8080/21422
Reviewed-by: Alexey Serbin 
Reviewed-by: Zihao Ye 
Tested-by: Quanlong Huang 


> Use `system_unsync` time for Kudu test clusters
> ---
>
> Key: IMPALA-9577
> URL: https://issues.apache.org/jira/browse/IMPALA-9577
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Major
> Fix For: Impala 4.0.0
>
>
> Recently Kudu made enhancements to time source configuration and adjusted the 
> time source for local clusters/tests to `system_unsync`. Impala should mirror 
> that behavior in Impala test clusters given there is no need to require 
> NTP-synchronized clock for a test where all the participating Kudu masters 
> and tablet servers are run at the same node using the same local wallclock.
>  
> See the Kudu commit here for details: 
> [https://github.com/apache/kudu/commit/eb2b70d4b96be2fc2fdd6b3625acc284ac5774be]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13051) Speed up test_query_log test runs

2024-05-14 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846267#comment-17846267
 ] 

ASF subversion and git services commented on IMPALA-13051:
--

Commit 3b35ddc8ca7b0e540fc16c413a170a25e164462b in impala's branch 
refs/heads/master from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=3b35ddc8c ]

IMPALA-13051: Speed up, refactor query log tests

Sets faster defaults for shutdown_grace_period_s and shutdown_deadline_s
when impalad_graceful_shutdown=True in tests. Impala waits until the grace
period has passed and all queries are stopped (or the deadline is exceeded)
before flushing the query log, so a grace period of 0 is sufficient. Adds
them in setup_method to reduce duplication in test declarations.

Re-uses TQueryTableColumn Thrift definitions for testing.

Moves waiting for query log table to exist to setup_method rather than
as a side-effect of get_client.

Refactors workload management code to reduce if-clause nesting.

Adds functional query workload tests for both the sys.impala_query_log
and the sys.impala_query_live tables to assert the names and order of
the individual columns within each table.

Renames the python tests for the sys.impala_query_log table removing the
unnecessary "_query_log_table_" string from the name of each test.

Change-Id: I1127ef041a3e024bf2b262767d56ec5f29bf3855
Reviewed-on: http://gerrit.cloudera.org:8080/21358
Tested-by: Impala Public Jenkins 
Reviewed-by: Riza Suminto 


> Speed up test_query_log test runs
> -
>
> Key: IMPALA-13051
> URL: https://issues.apache.org/jira/browse/IMPALA-13051
> Project: IMPALA
>  Issue Type: Task
>Affects Versions: Impala 4.4.0
>Reporter: Michael Smith
>Assignee: Jason Fehr
>Priority: Minor
>
> test_query_log.py takes 11 minutes to run. Most of them use graceful 
> shutdown, and provide an unnecessary grace period. Optimize test_query_log 
> test runs, and do some other code cleanup around workload management.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13061) Query Live table fails to load if default_transactional_type=insert_only set globally

2024-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845906#comment-17845906
 ] 

ASF subversion and git services commented on IMPALA-13061:
--

Commit 338fedb44703646664e2e22c6e2f35336924db22 in impala's branch 
refs/heads/branch-4.4.0 from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=338fedb44 ]

IMPALA-13061: Create query live as external table

Impala determines whether a managed table is transactional based on the
'transactional' table property. It assumes any managed table with
transactional=true returns non-null getValidWriteIds.

When 'default_transactional_type=insert_only' is set at startup (via
default_query_options), impala_query_live is created as a managed table
with transactional=true, but SystemTables don't implement
getValidWriteIds and are not meant to be transactional.

DataSourceTable has a similar problem; when a JDBC table is
created, setJdbcDataSourceProperties sets transactional=false. This
patch uses CREATE EXTERNAL TABLE sys.impala_query_live so that the
table is not created as a managed table and 'transactional' is not set.
That avoids creating a SystemTable that Impala can't read (it would hit
an IllegalStateException).
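
A small Python sketch of the property check at play (helper names are
hypothetical, not Impala's Java code):

  # A managed table with transactional=true is treated as transactional
  # and is expected to supply valid write IDs.
  def is_transactional(table_type: str, props: dict) -> bool:
      return (table_type == "MANAGED_TABLE"
              and props.get("transactional", "false").lower() == "true")

  # Created as EXTERNAL, sys.impala_query_live never gets
  # 'transactional' set, so the SystemTable is read normally.
  assert not is_transactional("EXTERNAL_TABLE", {})
  assert is_transactional("MANAGED_TABLE", {"transactional": "true"})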

Change-Id: Ie60a2bd03fabc63c85bcd9fa2489e9d47cd2aa65
Reviewed-on: http://gerrit.cloudera.org:8080/21401
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
(cherry picked from commit 1233ac3c579b5929866dba23debae63e5d2aae90)


> Query Live table fails to load if default_transactional_type=insert_only set 
> globally
> -
>
> Key: IMPALA-13061
> URL: https://issues.apache.org/jira/browse/IMPALA-13061
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Critical
> Fix For: Impala 4.5.0
>
>
> If transactional type defaults to insert_only for all queries via
> {code}
> --default_query_options=default_transactional_type=insert_only
> {code}
> the table definition for {{sys.impala_query_live}} is set to transactional, 
> which causes an exception in catalogd
> {code}
> I0506 22:07:42.808758  3972 jni-util.cc:302] 
> 4547b965aeebc5f0:8ba96c58] java.lang.IllegalStateException
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:496)
> at org.apache.impala.catalog.Table.getPartialInfo(Table.java:851)
> at 
> org.apache.impala.catalog.CatalogServiceCatalog.doGetPartialCatalogObject(CatalogServiceCatalog.java:3818)
> at 
> org.apache.impala.catalog.CatalogServiceCatalog.getPartialCatalogObject(CatalogServiceCatalog.java:3714)
> at 
> org.apache.impala.catalog.CatalogServiceCatalog.getPartialCatalogObject(CatalogServiceCatalog.java:3681)
> at 
> org.apache.impala.service.JniCatalog.lambda$getPartialCatalogObject$10(JniCatalog.java:431)
> at 
> org.apache.impala.service.JniCatalogOp.lambda$execAndSerialize$1(JniCatalogOp.java:90)
> at org.apache.impala.service.JniCatalogOp.execOp(JniCatalogOp.java:58)
> at 
> org.apache.impala.service.JniCatalogOp.execAndSerialize(JniCatalogOp.java:89)
> at 
> org.apache.impala.service.JniCatalogOp.execAndSerializeSilentStartAndFinish(JniCatalogOp.java:109)
> at 
> org.apache.impala.service.JniCatalog.execAndSerializeSilentStartAndFinish(JniCatalog.java:253)
> at 
> org.apache.impala.service.JniCatalog.getPartialCatalogObject(JniCatalog.java:430)
> {code}
> We need to override that setting while creating {{sys.impala_query_live}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13045) Fix intermittent failure in TestQueryLive.test_local_catalog

2024-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845905#comment-17845905
 ] 

ASF subversion and git services commented on IMPALA-13045:
--

Commit 39233ba3d134b8c18f6f208a7d85c3fadf8ee371 in impala's branch 
refs/heads/branch-4.4.0 from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=39233ba3d ]

IMPALA-13045: Wait for impala_query_live to exist

Waits for creation of 'sys.impala_query_live' in tests to ensure it has
been registered with HMS.

Change-Id: I5cc3fa3c43be7af9a5f097359a0d4f20d057a207
Reviewed-on: http://gerrit.cloudera.org:8080/21372
Reviewed-by: Impala Public Jenkins 
Tested-by: Michael Smith 
(cherry picked from commit b35aa819653dce062109e61d8f30171234dce5f9)


> Fix intermittent failure in TestQueryLive.test_local_catalog
> 
>
> Key: IMPALA-13045
> URL: https://issues.apache.org/jira/browse/IMPALA-13045
> Project: IMPALA
>  Issue Type: Task
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Major
> Fix For: Impala 4.5.0
>
>
> IMPALA-13005 introduced {{drop table sys.impala_query_live}}. In some test 
> environments (notably testing with Ozone), recreating that table in the 
> following test - test_local_catalog - does not occur before running the test 
> case portion that attempts to query that table.
> Update the test to wait for the table to be available.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12910) Run TPCH/TPCDS queries for external JDBC tables

2024-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845902#comment-17845902
 ] 

ASF subversion and git services commented on IMPALA-12910:
--

Commit 01401a0368cb8f19c86dc3fab764ee4b5732f2f6 in impala's branch 
refs/heads/branch-4.4.0 from wzhou-code
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=01401a036 ]

IMPALA-12910: Support running TPCH/TPCDS queries for JDBC tables

This patch adds a script to create external JDBC tables for the TPCH
and TPCDS datasets, and adds unit tests that run TPCH and TPCDS queries
against external JDBC tables with Impala-Impala federation. Note that
JDBC tables are mapping tables; they don't take additional disk space.
It fixes a race condition in the caching of SQL DataSource objects by
using a new DataSourceObjectCache class, which checks the reference
count before closing a SQL DataSource.
Adds a new query option 'clean_dbcp_ds_cache' with a default value of
true. When it is set to false, a SQL DataSource object is not closed
when its reference count reaches 0; it is kept in the cache until it
has been idle for more than 5 minutes. The flag
'dbcp_data_source_idle_timeout_s' is added to make the duration
configurable.
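
An illustrative Python sketch of those cache semantics (the real class
is Java; names here are simplified): an entry is closed only when its
reference count is 0 and it has been idle past the timeout.

  import time

  class DataSourceCache:
      """Refcounted cache with idle-timeout eviction (sketch)."""
      def __init__(self, idle_timeout_s=300):            # 5 minutes
          self.idle_timeout_s = idle_timeout_s
          self.entries = {}                              # key -> [ds, refs, idle_since]

      def acquire(self, key, factory):
          entry = self.entries.setdefault(key, [factory(), 0, time.monotonic()])
          entry[1] += 1
          return entry[0]

      def release(self, key):
          entry = self.entries[key]
          entry[1] -= 1
          if entry[1] == 0:
              entry[2] = time.monotonic()                # idle, not closed yet

      def sweep(self):                                   # run by a cleanup thread
          now = time.monotonic()
          for key, (ds, refs, idle_since) in list(self.entries.items()):
              if refs == 0 and now - idle_since > self.idle_timeout_s:
                  self.entries.pop(key)
                  ds.close()                             # assumes ds has close()
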
java.sql.Connection.close() sometimes fails to remove a closed
connection from the connection pool, which causes JDBC worker threads
to wait a long time for available connections. The workaround is to
call the BasicDataSource.invalidateConnection() API to close a
connection.
Two flags are added for the DBCP configuration properties 'maxTotal'
and 'maxWaitMillis'. Note that the 'maxActive' and 'maxWait' properties
were renamed to 'maxTotal' and 'maxWaitMillis' respectively in
apache.commons.dbcp v2.
Fixes a bug in database type comparison: the type strings specified by
the user could be lower case or a mix of upper/lower case, but the code
compared them against an upper-case string.
Fixes an issue where the SQL DataSource object was not closed in
JdbcDataSource.open() and JdbcDataSource.getNext() when errors were
returned from DBCP APIs or JDBC drivers.

testdata/bin/create-tpc-jdbc-tables.py supports creating JDBC tables
for Impala-Impala, Postgres, and MySQL.
The following sample commands create TPCDS JDBC tables for Impala-Impala
federation with a remote coordinator running at 10.19.10.86, and for a
Postgres server running at 10.19.10.86:
  ${IMPALA_HOME}/testdata/bin/create-tpc-jdbc-tables.py \
    --jdbc_db_name=tpcds_jdbc --workload=tpcds \
    --database_type=IMPALA --database_host=10.19.10.86 --clean

  ${IMPALA_HOME}/testdata/bin/create-tpc-jdbc-tables.py \
    --jdbc_db_name=tpcds_jdbc --workload=tpcds \
    --database_type=POSTGRES --database_host=10.19.10.86 \
    --database_name=tpcds --clean

TPCDS tests for JDBC tables run only for release/exhaustive builds.
TPCH tests for JDBC tables run for core and exhaustive builds, except
Dockerized builds.

Remaining Issues:
 - tpcds-decimal_v2-q80a failed with returned rows not matching expected
   results for some decimal values. This will be fixed in IMPALA-13018.

Testing:
 - Passed core tests.
 - Passed query_test/test_tpcds_queries.py in release/exhaustive build.
 - Manually verified that only one SQL DataSource object was created for
   test_tpcds_queries.py::TestTpcdsQueryForJdbcTables since query option
   'clean_dbcp_ds_cache' was set as false, and the SQL DataSource object
   was closed by cleanup thread.

Change-Id: I44e8c1bb020e90559c7f22483a7ab7a151b8f48a
Reviewed-on: http://gerrit.cloudera.org:8080/21304
Reviewed-by: Abhishek Rawat 
Tested-by: Impala Public Jenkins 
(cherry picked from commit 08f8a300250df7b4f9a517cdb6bab48c379b7e03)


> Run TPCH/TPCDS queries for external JDBC tables
> ---
>
> Key: IMPALA-12910
> URL: https://issues.apache.org/jira/browse/IMPALA-12910
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Perf Investigation
>Reporter: Wenzhe Zhou
>Assignee: Wenzhe Zhou
>Priority: Major
> Fix For: Impala 4.5.0
>
>
> Need performance data for queries on external JDBC tables to be documented in 
> the design doc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-11499) Refactor UrlEncode function to handle special characters

2024-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845907#comment-17845907
 ] 

ASF subversion and git services commented on IMPALA-11499:
--

Commit b8a66b0e104f8e25e70fce0326d36c9b48672dbb in impala's branch 
refs/heads/branch-4.4.0 from pranavyl
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=b8a66b0e1 ]

IMPALA-11499: Refactor UrlEncode function to handle special characters

An error came from an issue with URL encoding, where certain Unicode
characters were being incorrectly encoded due to their UTF-8
representation matching characters in the set of characters to escape.
For example, the string '运', which consists of three bytes
0xe8 0xbf 0x90 was wrongly getting encoded into '\E8%FFBF\90',
because the middle byte matched one of the two bytes that
represented the "\u00FF" literal. Inclusion of "\u00FF" was likely
a mistake from the beginning and it should have been '\x7F'.

The patch makes three key changes:
1. Before the change, the set of characters that need to be escaped
was stored as a string. The current patch uses an unordered_set
instead.

2. '\xFF', which is an invalid UTF-8 byte and whose inclusion was
erroneous from the beginning, is replaced with '\x7F', which is a
control character for DELETE, ensuring consistency and correctness in
URL encoding.

3. The list of characters to be escaped is extended to match the
current list in Hive.

Testing: Tests on both traditional Hive tables and Iceberg tables
are included in unicode-column-name.test, insert.test,
coding-util-test.cc and test_insert.py.

Change-Id: I88c4aba5d811dfcec809583d0c16fcbc0ca730fb
Reviewed-on: http://gerrit.cloudera.org:8080/21131
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
(cherry picked from commit 85cd07a11e876f3d8773f2638f699c61a6b0dd4c)
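
A standalone Python illustration of the byte collision described above
(this is not Impala's C++ code; urllib stands in for a correct per-byte
encoder):

  import urllib.parse

  # '运' is three UTF-8 bytes: 0xE8 0xBF 0x90. Its middle byte, 0xBF, is
  # also the second byte of the UTF-8 encoding of '\u00FF' -- the
  # collision that produced the '%FFBF' garbage.
  assert "运".encode("utf-8") == b"\xe8\xbf\x90"
  assert "\u00ff".encode("utf-8") == b"\xc3\xbf"   # shares the 0xBF byte

  # A correct encoder works on whole bytes and never splits a multi-byte
  # character:
  assert urllib.parse.quote("运") == "%E8%BF%90"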


> Refactor UrlEncode function to handle special characters
> 
>
> Key: IMPALA-11499
> URL: https://issues.apache.org/jira/browse/IMPALA-11499
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Reporter: Quanlong Huang
>Assignee: Pranav Yogi Lodha
>Priority: Critical
> Fix For: Impala 4.5.0
>
>
> Partition values are incorrectly URL-encoded in backend for unicode 
> characters, e.g. '运营业务数据' is encoded to '�%FFBF�营业务数据' which is wrong.
> To reproduce the issue, first create a partition table:
> {code:sql}
> create table my_part_tbl (id int) partitioned by (p string) stored as parquet;
> {code}
> Then insert data into it using partition values containing '运'. They will 
> fail:
> {noformat}
> [localhost:21050] default> insert into my_part_tbl partition(p='运营业务数据') 
> values (0);
> Query: insert into my_part_tbl partition(p='运营业务数据') values (0)
> Query submitted at: 2022-08-16 10:03:56 (Coordinator: 
> http://quanlong-OptiPlex-BJ:25000)
> Query progress can be monitored at: 
> http://quanlong-OptiPlex-BJ:25000/query_plan?query_id=404ac3027c4b7169:39d16a2d
> ERROR: Error(s) moving partition files. First error (of 1) was: Hdfs op 
> (RENAME 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/_impala_insert_staging/404ac3027c4b7169_39d16a2d/.404ac3027c4b7169-39d16a2d_1475855322_dir/p=�%FFBF�营业务数据/404ac3027c4b7169-39d16a2d_1585092794_data.0.parq
>  TO 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/p=�%FFBF�营业务数据/404ac3027c4b7169-39d16a2d_1585092794_data.0.parq)
>  failed, error was: 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/_impala_insert_staging/404ac3027c4b7169_39d16a2d/.404ac3027c4b7169-39d16a2d_1475855322_dir/p=�%FFBF�营业务数据/404ac3027c4b7169-39d16a2d_1585092794_data.0.parq
> Error(5): Input/output error
> [localhost:21050] default> insert into my_part_tbl partition(p='运') values 
> (0);
> Query: insert into my_part_tbl partition(p='运') values (0)
> Query submitted at: 2022-08-16 10:04:22 (Coordinator: 
> http://quanlong-OptiPlex-BJ:25000)
> Query progress can be monitored at: 
> http://quanlong-OptiPlex-BJ:25000/query_plan?query_id=a64e5883473ec28d:86e7e335
> ERROR: Error(s) moving partition files. First error (of 1) was: Hdfs op 
> (RENAME 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/_impala_insert_staging/a64e5883473ec28d_86e7e335/.a64e5883473ec28d-86e7e335_1582623091_dir/p=�%FFBF�/a64e5883473ec28d-86e7e335_163454510_data.0.parq
>  TO 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/p=�%FFBF�/a64e5883473ec28d-86e7e335_163454510_data.0.parq)
>  failed, error was: 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/_impala_insert_staging/a64e5883473ec28d_86e7e335/.a64e5883473ec28d-86e7e335_1582623091_dir/p=�%FFBF�/a64e5883473ec28d-86e7e335_163454510_data.0.parq
> 

[jira] [Commented] (IMPALA-13018) Fix test_tpcds_queries.py/TestTpcdsQueryForJdbcTables.test_tpcds-decimal_v2-q80a failure

2024-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845903#comment-17845903
 ] 

ASF subversion and git services commented on IMPALA-13018:
--

Commit 01401a0368cb8f19c86dc3fab764ee4b5732f2f6 in impala's branch 
refs/heads/branch-4.4.0 from wzhou-code
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=01401a036 ]

IMPALA-12910: Support running TPCH/TPCDS queries for JDBC tables

This patch adds a script to create external JDBC tables for the TPCH
and TPCDS datasets, and adds unit tests that run TPCH and TPCDS queries
against external JDBC tables with Impala-Impala federation. Note that
JDBC tables are mapping tables; they don't take additional disk space.
It fixes a race condition in the caching of SQL DataSource objects by
using a new DataSourceObjectCache class, which checks the reference
count before closing a SQL DataSource.
Adds a new query option 'clean_dbcp_ds_cache' with a default value of
true. When it is set to false, a SQL DataSource object is not closed
when its reference count reaches 0; it is kept in the cache until it
has been idle for more than 5 minutes. The flag
'dbcp_data_source_idle_timeout_s' is added to make the duration
configurable.
java.sql.Connection.close() sometimes fails to remove a closed
connection from the connection pool, which causes JDBC worker threads
to wait a long time for available connections. The workaround is to
call the BasicDataSource.invalidateConnection() API to close a
connection.
Two flags are added for the DBCP configuration properties 'maxTotal'
and 'maxWaitMillis'. Note that the 'maxActive' and 'maxWait' properties
were renamed to 'maxTotal' and 'maxWaitMillis' respectively in
apache.commons.dbcp v2.
Fixes a bug in database type comparison: the type strings specified by
the user could be lower case or a mix of upper/lower case, but the code
compared them against an upper-case string.
Fixes an issue where the SQL DataSource object was not closed in
JdbcDataSource.open() and JdbcDataSource.getNext() when errors were
returned from DBCP APIs or JDBC drivers.

testdata/bin/create-tpc-jdbc-tables.py supports creating JDBC tables
for Impala-Impala, Postgres, and MySQL.
The following sample commands create TPCDS JDBC tables for Impala-Impala
federation with a remote coordinator running at 10.19.10.86, and for a
Postgres server running at 10.19.10.86:
  ${IMPALA_HOME}/testdata/bin/create-tpc-jdbc-tables.py \
    --jdbc_db_name=tpcds_jdbc --workload=tpcds \
    --database_type=IMPALA --database_host=10.19.10.86 --clean

  ${IMPALA_HOME}/testdata/bin/create-tpc-jdbc-tables.py \
    --jdbc_db_name=tpcds_jdbc --workload=tpcds \
    --database_type=POSTGRES --database_host=10.19.10.86 \
    --database_name=tpcds --clean

TPCDS tests for JDBC tables run only for release/exhaustive builds.
TPCH tests for JDBC tables run for core and exhaustive builds, except
Dockerized builds.

Remaining Issues:
 - tpcds-decimal_v2-q80a failed with returned rows not matching expected
   results for some decimal values. This will be fixed in IMPALA-13018.

Testing:
 - Passed core tests.
 - Passed query_test/test_tpcds_queries.py in release/exhaustive build.
 - Manually verified that only one SQL DataSource object was created for
   test_tpcds_queries.py::TestTpcdsQueryForJdbcTables since query option
   'clean_dbcp_ds_cache' was set as false, and the SQL DataSource object
   was closed by cleanup thread.

Change-Id: I44e8c1bb020e90559c7f22483a7ab7a151b8f48a
Reviewed-on: http://gerrit.cloudera.org:8080/21304
Reviewed-by: Abhishek Rawat 
Tested-by: Impala Public Jenkins 
(cherry picked from commit 08f8a300250df7b4f9a517cdb6bab48c379b7e03)


> Fix 
> test_tpcds_queries.py/TestTpcdsQueryForJdbcTables.test_tpcds-decimal_v2-q80a 
> failure
> 
>
> Key: IMPALA-13018
> URL: https://issues.apache.org/jira/browse/IMPALA-13018
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Reporter: Wenzhe Zhou
>Assignee: Wenzhe Zhou
>Priority: Major
> Fix For: Impala 4.5.0
>
>
> The returned rows are not matching expected results for some decimal type of 
> columns. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13038) Support profile tab for imported query profiles

2024-05-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845519#comment-17845519
 ] 

ASF subversion and git services commented on IMPALA-13038:
--

Commit 0d215da8d4e3f93ad3c1cd72aa801fbcb9464fb0 in impala's branch 
refs/heads/master from Surya Hebbar
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=0d215da8d ]

IMPALA-13038: Support profile tab for imported query profiles

Imported query profiles currently support the following tabs:
 - Query Statement
 - Query Timeline
 - Query Text Plan

With the current patch, the "Query Profile" tab is also supported.

In the "QueryProfileHandler", "query_id" is now added before verifying
its existence in the query log as in "QuerySummaryHandler" and others.

"getQueryID" function has been added to "util.js", as it is helpful
across multiple query pages for retrieving the query ID into JS scripts,
before the page loads up.

On loading the imported "Query Profile" page, the query profile download
section and the server's non-existent query ID alerts are removed.
All unsupported navbar tabs are removed and the current tab is set to
active.

The query profile is retrieved from the indexedDB "imported_queries"
database. The profile is then passed to the "profileToString" function,
which converts it into indented text for display on the profile page.

Each profile and its child profiles are printed in the following order
with the right indentation (fields are skipped if they do not exist).

Profile name:
  - Info strings:
  - Event sequences:
- Offset:
- Events:
  - Child profile(recursive):
  - Counters:
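
A sketch of that recursive pretty-printing in Python (the real code is
JavaScript in util.js; the node schema below is made up):

  def profile_to_string(node, depth=0):
      # node: {"name": ..., "info": {...}, "children": [...]}
      pad = "  " * depth
      lines = [f"{pad}{node['name']}:"]
      for key, value in node.get("info", {}).items():
          lines.append(f"{pad}  - {key}: {value}")
      for child in node.get("children", []):
          lines.append(profile_to_string(child, depth + 1))
      return "\n".join(lines)

  print(profile_to_string(
      {"name": "Query", "info": {"Query State": "FINISHED"},
       "children": [{"name": "Planner", "info": {}, "children": []}]}))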

Change-Id: Iddcf2e285abbf42f97bde19014be076ccd6374bc
Reviewed-on: http://gerrit.cloudera.org:8080/21400
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Support profile tab for imported query profiles
> ---
>
> Key: IMPALA-13038
> URL: https://issues.apache.org/jira/browse/IMPALA-13038
> Project: IMPALA
>  Issue Type: New Feature
>Reporter: Surya Hebbar
>Assignee: Surya Hebbar
>Priority: Major
> Attachments: json_profile_a34485359bfdfe1f_3ca8177b.json, 
> json_profile_a34485359bfdfe1f_3ca8177b.txt
>
>
> Query profile imports currently support the following tabs.
>  - Query Statement
>  - Query Timeline
>  - Query Text Plan
> It would be helpful to support "Query Profile" tab for these imports.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13036) Document Iceberg metadata tables

2024-05-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845335#comment-17845335
 ] 

ASF subversion and git services commented on IMPALA-13036:
--

Commit aba27edc3338765a6b5133be095989f83cce4747 in impala's branch 
refs/heads/master from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=aba27edc3 ]

IMPALA-13036: Document Iceberg metadata tables

This change adds documentation on how Iceberg metadata tables can be
used.

Testing:
 - built docs locally

Change-Id: Ic453f567b814cb4363a155e2008029e94efb6ed1
Reviewed-on: http://gerrit.cloudera.org:8080/21387
Tested-by: Impala Public Jenkins 
Reviewed-by: Peter Rozsa 


> Document Iceberg metadata tables
> 
>
> Key: IMPALA-13036
> URL: https://issues.apache.org/jira/browse/IMPALA-13036
> Project: IMPALA
>  Issue Type: Documentation
>Reporter: Daniel Becker
>Assignee: Daniel Becker
>Priority: Major
>  Labels: impala-iceberg
>
> Impala now supports displaying Iceberg metadata tables, we should document 
> this feature.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-11328) Mismatch on max_errors documentation

2024-05-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-11328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845319#comment-17845319
 ] 

ASF subversion and git services commented on IMPALA-11328:
--

Commit aac7f527da1953fcc304bda9e7e5214585fdbf18 in impala's branch 
refs/heads/master from m-sanjana19
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=aac7f527d ]

IMPALA-11328: [DOCS] Fix incorrect default value for max_errors

Change-Id: I442cd3ff51520c12376a13d7c78565542793d908
Reviewed-on: http://gerrit.cloudera.org:8080/21419
Reviewed-by: Quanlong Huang 
Tested-by: Impala Public Jenkins 


> Mismatch on max_errors documentation
> -
>
> Key: IMPALA-11328
> URL: https://issues.apache.org/jira/browse/IMPALA-11328
> Project: IMPALA
>  Issue Type: Documentation
>  Components: Docs
>Affects Versions: Impala 4.0.0
>Reporter: Riza Suminto
>Assignee: Sanjana Malhotra
>Priority: Minor
>
> The doc mentions that max_errors defaults to 1000.
> [https://github.com/apache/impala/blob/62683e0ebb78902e142975971c93b8fa011fb632/docs/topics/impala_max_errors.xml#L55]
>  
> But the code actually defaults to 100.
> [https://github.com/apache/impala/blob/62683e0ebb78902e142975971c93b8fa011fb632/be/src/runtime/query-state.cc#L125]
>  
> [https://github.com/apache/impala/blob/62683e0ebb78902e142975971c93b8fa011fb632/common/thrift/Query.thrift#L134]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13018) Fix test_tpcds_queries.py/TestTpcdsQueryForJdbcTables.test_tpcds-decimal_v2-q80a failure

2024-05-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845222#comment-17845222
 ] 

ASF subversion and git services commented on IMPALA-13018:
--

Commit 3cbb3be5f72dbb889744675fa109dbd1659a7a84 in impala's branch 
refs/heads/master from wzhou-code
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=3cbb3be5f ]

IMPALA-13018: Block pushdown of conjuncts with implicit casting on base
columns for JDBC tables

Query q80a contains a BETWEEN with a cast to timestamp in the WHERE
clause, like:
  d_date between cast('2000-08-23' as timestamp)
    and (cast('2000-08-23' as timestamp) + interval 30 days)
A BETWEEN predicate casts all its exprs to compatible types, so the
planner generates predicates for the DataSourceScanNode as:
  CAST(d_date AS TIMESTAMP) >= TIMESTAMP '2000-08-23 00:00:00',
  CAST(d_date AS TIMESTAMP) <= TIMESTAMP '2000-09-22 00:00:00'
But a cast of a column to Date/Timestamp cannot currently be pushed down
to a JDBC table. This patch fixes the issue by blocking conjuncts with
an implicit unsafe cast, or a cast to date/timestamp, from being added
to the offered predicate list for a JDBC table.
Note that explicit casts on base columns are already not allowed to be
pushed down.
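
A hypothetical Python sketch of the offer decision described above
(Impala's actual check lives in the Java planner; these names are
invented):

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class ColumnSide:
      is_slot_ref: bool                    # bare column reference underneath?
      cast_target: Optional[str] = None    # type if wrapped in an implicit cast
      unsafe_cast: bool = False            # implicit cast deemed unsafe

  def offerable_to_jdbc(col: ColumnSide) -> bool:
      if not col.is_slot_ref:
          return False
      if col.unsafe_cast or col.cast_target in ("DATE", "TIMESTAMP"):
          return False
      return True

  assert offerable_to_jdbc(ColumnSide(is_slot_ref=True))       # d_date >= ...
  assert not offerable_to_jdbc(ColumnSide(True, "TIMESTAMP"))  # CAST(d_date AS TIMESTAMP) >= ...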

Testing:
 - Add new planner unit-tests, including explicit casting, implicit
   casting to date/timestamp, built-in functions, arithmetic
   expressions.
   The predicates which are accepted for JDBC are shown in plan under
   "data source predicates" of DataSourceScanNode, predicates which
   are not accepted for JDBC are shown in plan under "predicates" of
   DataSourceScanNodes.
 - Passed all tpcds queries for JDBC tables, including q80a.
 - Passed core test

Change-Id: Iabd7e28b8d5f11f25a000dc4c9ab65895056b572
Reviewed-on: http://gerrit.cloudera.org:8080/21409
Reviewed-by: Riza Suminto 
Tested-by: Impala Public Jenkins 


> Fix 
> test_tpcds_queries.py/TestTpcdsQueryForJdbcTables.test_tpcds-decimal_v2-q80a 
> failure
> 
>
> Key: IMPALA-13018
> URL: https://issues.apache.org/jira/browse/IMPALA-13018
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Reporter: Wenzhe Zhou
>Assignee: Wenzhe Zhou
>Priority: Major
>
> The returned rows are not matching expected results for some decimal type of 
> columns. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12607) Bump GBN to get HMS thift API change HIVE-27499

2024-05-10 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845221#comment-17845221
 ] 

ASF subversion and git services commented on IMPALA-12607:
--

Commit 68f8a6a1df0d2da91baa87b8b6699ddbc495b88e in impala's branch 
refs/heads/master from Sai Hemanth Gantasala
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=68f8a6a1d ]

IMPALA-12607: Bump the GBN and fetch events specific to the db/table
from the metastore

Bump the GBN to 49623641 to leverage HIVE-27499, so that Impala can
directly fetch the latest events specific to the db/table from the
metastore, instead of fetching the events from metastore and then
filtering in the cache matching the DbName/TableName.

Implementation Details:
Currently, when a DDL/DML is performed in Impala, we fetch all the
events from the metastore based on the current eventId and then filter
them in Impala, which can be a bottleneck if the event count is huge.
This can be optimized by including the db name and/or table name in the
notification event request object and then filtering by event type in
Impala. This can provide a performance boost on tables that generate a
lot of events.

Note:
Also included ShowUtils class in hive-minimal-exec jar as it is
required in the current build version

Testing:
1) Did some tests in local cluster
2) Added a test case in MetaStoreEventsProcessorTest

Change-Id: I6aecd5108b31c24e6e2c6f9fba6d4d44a3b00729
Reviewed-on: http://gerrit.cloudera.org:8080/20979
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Bump GBN to get HMS thift API change HIVE-27499
> ---
>
> Key: IMPALA-12607
> URL: https://issues.apache.org/jira/browse/IMPALA-12607
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Sai Hemanth Gantasala
>Assignee: Sai Hemanth Gantasala
>Priority: Major
>  Labels: catalog-2024
>
> Leverage HIVE-27499, so that Impala can directly fetch the latest events 
> specific to the database/table from the metastore, instead of fetching the 
> events from metastore and then filtering in the cache matching the 
> DbName/TableName.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13040) SIGSEGV in QueryState::UpdateFilterFromRemote

2024-05-09 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845175#comment-17845175
 ] 

ASF subversion and git services commented on IMPALA-13040:
--

Commit 09d2f10f4ddf3499b6255a6d14653e7738c2928b in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=09d2f10f4 ]

IMPALA-13040: Add waiting mechanism in UpdateFilterFromRemote

It is possible for an UpdateFilterFromRemote RPC to arrive at an
impalad executor before the QueryState of the destination query is
created or has completed initialization. This patch adds a wait
mechanism in the UpdateFilterFromRemote RPC endpoint that waits a few
milliseconds until the QueryState exists and completes initialization.

The wait time is fixed at 500ms, with an exponentially growing sleep
period in between. If the wait time passes and the QueryState is still
not found or initialized, the UpdateFilterFromRemote RPC is deemed
failed and query execution moves on without the complete filter.
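
A minimal Python sketch of that waiting scheme (names are illustrative;
the real code is C++ in query-state.cc):

  import time

  def wait_for_query_state(lookup, timeout_s=0.5, first_sleep_s=0.001):
      """Poll until lookup() returns an initialized QueryState, with
      exponentially growing sleeps, up to a fixed time budget."""
      deadline = time.monotonic() + timeout_s
      sleep_s = first_sleep_s
      while True:
          qs = lookup()
          if qs is not None and qs.initialized:
              return qs
          remaining = deadline - time.monotonic()
          if remaining <= 0:
              return None          # caller proceeds without the filter
          time.sleep(min(sleep_s, remaining))
          sleep_s *= 2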

Testing:
- Add BE tests in network-util-test.cc
- Add test_runtime_filter_aggregation.py::TestLateQueryStateInit
- Pass exhaustive runs of test_runtime_filter_aggregation.py,
  test_query_live.py, and test_query_log.py

Change-Id: I156d1f0c694b91ba34be70bc53ae9bacf924b3b9
Reviewed-on: http://gerrit.cloudera.org:8080/21383
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> SIGSEGV in  QueryState::UpdateFilterFromRemote
> --
>
> Key: IMPALA-13040
> URL: https://issues.apache.org/jira/browse/IMPALA-13040
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Reporter: Csaba Ringhofer
>Priority: Critical
>
> {code}
> Crash reason:  SIGSEGV /SEGV_MAPERR
> Crash address: 0x48
> Process uptime: not available
> Thread 114 (crashed)
>  0  libpthread.so.0 + 0x9d00
> rax = 0x00019e57ad00   rdx = 0x2a656720
> rcx = 0x059a9860   rbx = 0x
> rsi = 0x00019e57ad00   rdi = 0x0038
> rbp = 0x7f6233d544e0   rsp = 0x7f6233d544a8
>  r8 = 0x06a53540r9 = 0x0039
> r10 = 0x   r11 = 0x000a
> r12 = 0x00019e57ad00   r13 = 0x7f62a2f997d0
> r14 = 0x7f6233d544f8   r15 = 0x1632c0f0
> rip = 0x7f62a2f96d00
> Found by: given as instruction pointer in context
>  1  
> impalad!impala::QueryState::UpdateFilterFromRemote(impala::UpdateFilterParamsPB
>  const&, kudu::rpc::RpcContext*) [query-state.cc : 1033 + 0x5]
> rbp = 0x7f6233d54520   rsp = 0x7f6233d544f0
> rip = 0x015c0837
> Found by: previous frame's frame pointer
>  2  
> impalad!impala::DataStreamService::UpdateFilterFromRemote(impala::UpdateFilterParamsPB
>  const*, impala::UpdateFilterResultPB*, kudu::rpc::RpcContext*) 
> [data-stream-service.cc : 134 + 0xb]
> rbp = 0x7f6233d54640   rsp = 0x7f6233d54530
> rip = 0x017c05de
> Found by: previous frame's frame pointer
> {code}
> The line that crashes is 
> https://github.com/apache/impala/blob/b39cd79ae84c415e0aebec2c2b4d7690d2a0cc7a/be/src/runtime/query-state.cc#L1033
> My guess is that the actual segfault is within WaitForPrepare(), but it 
> was inlined. Not sure if a remote filter can arrive even before 
> QueryState::Init is finished - that would explain the issue, as 
> instances_prepared_barrier_ is not yet created at that point.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-10451) TestAvroSchemaResolution.test_avro_schema_resolution fails when bumping Hive to have HIVE-24157

2024-05-09 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845053#comment-17845053
 ] 

ASF subversion and git services commented on IMPALA-10451:
--

Commit d1d28c0eec52c212b4b8cd09080327c542f24bfa in impala's branch 
refs/heads/master from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=d1d28c0ee ]

IMPALA-10451: Fix avro table loading failures caused by HIVE-24157

HIVE-24157 introduces a restriction that prohibits casting DATE/TIMESTAMP
types to and from NUMERIC. It's enabled by default and can be turned off
by setting hive.strict.timestamp.conversion=false.

This restriction breaks the data loading on avro_coldef and
avro_extra_coldef tables, which results in empty data set and finally
fails TestAvroSchemaResolution.test_avro_schema_resolution.

This patch explicitly disables the restriction in loading these two avro
tables.

The Hive version currently used for development does not have HIVE-24157,
but upstream Hive does have it. Adding hive.strict.timestamp.conversion
does not cause problems for Hive versions that don't have HIVE-24157.

Tests:
 - Run the data loading and test_avro_schema_resolution locally using
   a Hive that has HIVE-24157.
 - Run CORE tests
 - Run data loading with a Hive that doesn't have HIVE-24157.

Change-Id: I3e2a47d60d4079fece9c04091258215f3d6a7b52
Reviewed-on: http://gerrit.cloudera.org:8080/21413
Reviewed-by: Impala Public Jenkins 
Tested-by: Joe McDonnell 


> TestAvroSchemaResolution.test_avro_schema_resolution fails when bumping Hive 
> to have HIVE-24157
> ---
>
> Key: IMPALA-10451
> URL: https://issues.apache.org/jira/browse/IMPALA-10451
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Quanlong Huang
>Assignee: Joe McDonnell
>Priority: Major
>
> TestAvroSchemaResolution.test_avro_schema_resolution recently fails when 
> building against a Hive version with HIVE-24157.
> {code:java}
> query_test.test_avro_schema_resolution.TestAvroSchemaResolution.test_avro_schema_resolution[protocol:
>  beeswax | exec_option: \{'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> avro/snap/block] (from pytest)
> query_test/test_avro_schema_resolution.py:36: in test_avro_schema_resolution
>  self.run_test_case('QueryTest/avro-schema-resolution', vector, 
> unique_database)
> common/impala_test_suite.py:690: in run_test_case
>  self.__verify_results_and_errors(vector, test_section, result, use_db)
> common/impala_test_suite.py:523: in __verify_results_and_errors
>  replace_filenames_with_placeholder)
> common/test_result_verifier.py:456: in verify_raw_results
>  VERIFIER_MAP[verifier](expected, actual)
> common/test_result_verifier.py:278: in verify_query_result_is_equal
>  assert expected_results == actual_results
> E assert Comparing QueryTestResults (expected vs actual):
> E 10 != 0 
> {code}
> The failed query is
> {code:sql}
> select count(*) from functional_avro_snap.avro_coldef {code}
> The cause is that data loading for avro_coldef failed. The DML is
> {code:sql}
> INSERT OVERWRITE TABLE avro_coldef PARTITION(year=2014, month=1)
> SELECT bool_col, tinyint_col, smallint_col, int_col, bigint_col,
> float_col, double_col, date_string_col, string_col, timestamp_col
> FROM (select * from functional.alltypes order by id limit 5) a;
> {code}
> The failure (found in HS2) is:
> {code}
> 2021-01-24T01:52:16,340 ERROR [9433ee64-d706-4fa4-a146-18d71bf17013 
> HiveServer2-Handler-Pool: Thread-4946] parse.CalcitePlanner: CBO failed, 
> skipping CBO.
> org.apache.hadoop.hive.ql.exec.UDFArgumentException: Casting DATE/TIMESTAMP 
> types to NUMERIC is prohibited (hive.strict.timestamp.conversion)
>  at 
> org.apache.hadoop.hive.ql.udf.TimestampCastRestrictorResolver.getEvalMethod(TimestampCastRestrictorResolver.java:62)
>  ~[hive-exec-3.1.3000.7.1.6.0-169.jar:3.1.3000.7.1.6.0-169]
>  at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:168)
>  ~[hive-exec-3.1.3000.7.1.6.0-169.jar:3.1.3000.7.1.6.0-169]
>  at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:149)
>  ~[hive-exec-3.1.3000.7.1.6.0-169.jar:3.1.3000.7.1.6.0-169]
>  at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:260)
>  ~[hive-exec-3.1.3000.7.1.6.0-169.jar:3.1.3000.7.1.6.0-169]
>  at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:292)
>  ~[hive-exec-3.1.3000.7.1.6.0-169.jar:3.1.3000.7.1.6.0-169]
>  at 
> 

[jira] [Commented] (IMPALA-11499) Refactor UrlEncode function to handle special characters

2024-05-09 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845052#comment-17845052
 ] 

ASF subversion and git services commented on IMPALA-11499:
--

Commit 85cd07a11e876f3d8773f2638f699c61a6b0dd4c in impala's branch 
refs/heads/master from pranavyl
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=85cd07a11 ]

IMPALA-11499: Refactor UrlEncode function to handle special characters

An error came from an issue with URL encoding, where certain Unicode
characters were incorrectly encoded because bytes of their UTF-8
representation matched characters in the set of characters to escape.
For example, the string '运', which consists of the three bytes
0xe8 0xbf 0x90, was wrongly encoded as '\E8%FFBF\90', because the middle
byte matched one of the two bytes that represented the "\u00FF" literal.
Including "\u00FF" was likely a mistake from the beginning; it should
have been '\x7F'.

The patch makes three key changes:
1. Before the change, the set of characters that need to be escaped
was stored as a string. The patch now uses an unordered_set instead
(see the sketch after this list).

2. '\xFF', which is an invalid UTF-8 byte and whose inclusion was
erroneous from the beginning, is replaced with '\x7F', which is a
control character for DELETE, ensuring consistency and correctness in
URL encoding.

3. The list of characters to be escaped is extended to match the
current list in Hive.
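
A self-contained sketch of the corrected per-byte encoding, with the escape
set held as a set of byte values rather than a string. The set contents here
are illustrative, not Hive's exact list, and the real implementation is C++
in the Impala backend:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Set;

public class UrlEncodeSketch {
  // Illustrative escape set: ASCII specials plus 0x7F (DELETE), never 0xFF.
  private static final Set<Integer> ESCAPE =
      Set.of((int) '"', (int) '#', (int) '%', (int) '\'', (int) '*', (int) '/',
             (int) ':', (int) '=', (int) '?', (int) '\\', (int) '{', (int) '}',
             0x7F);

  static String urlEncode(String s) {
    StringBuilder out = new StringBuilder();
    for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
      int u = b & 0xFF;                          // unsigned byte value
      if (ESCAPE.contains(u)) {
        out.append(String.format("%%%02X", u));  // escape listed bytes only
      } else {
        out.append((char) u);                    // pass all other bytes through
      }
    }
    return out.toString();
  }

  public static void main(String[] args) {
    // '运' is 0xE8 0xBF 0x90 in UTF-8; none of its bytes is in the escape
    // set, so all three bytes survive intact instead of being mangled the
    // way '\E8%FFBF\90' was.
    System.out.println(urlEncode("p=运"));
  }
}
{code}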

Testing: Tests on both traditional Hive tables and Iceberg tables
are included in unicode-column-name.test, insert.test,
coding-util-test.cc and test_insert.py.

Change-Id: I88c4aba5d811dfcec809583d0c16fcbc0ca730fb
Reviewed-on: http://gerrit.cloudera.org:8080/21131
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Refactor UrlEncode function to handle special characters
> 
>
> Key: IMPALA-11499
> URL: https://issues.apache.org/jira/browse/IMPALA-11499
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Reporter: Quanlong Huang
>Assignee: Pranav Yogi Lodha
>Priority: Critical
>
> Partition values are incorrectly URL-encoded in the backend for Unicode 
> characters, e.g. '运营业务数据' is encoded to '�%FFBF�营业务数据', which is wrong.
> To reproduce the issue, first create a partition table:
> {code:sql}
> create table my_part_tbl (id int) partitioned by (p string) stored as parquet;
> {code}
> Then insert data into it using partition values containing '运'. They will 
> fail:
> {noformat}
> [localhost:21050] default> insert into my_part_tbl partition(p='运营业务数据') 
> values (0);
> Query: insert into my_part_tbl partition(p='运营业务数据') values (0)
> Query submitted at: 2022-08-16 10:03:56 (Coordinator: 
> http://quanlong-OptiPlex-BJ:25000)
> Query progress can be monitored at: 
> http://quanlong-OptiPlex-BJ:25000/query_plan?query_id=404ac3027c4b7169:39d16a2d
> ERROR: Error(s) moving partition files. First error (of 1) was: Hdfs op 
> (RENAME 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/_impala_insert_staging/404ac3027c4b7169_39d16a2d/.404ac3027c4b7169-39d16a2d_1475855322_dir/p=�%FFBF�营业务数据/404ac3027c4b7169-39d16a2d_1585092794_data.0.parq
>  TO 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/p=�%FFBF�营业务数据/404ac3027c4b7169-39d16a2d_1585092794_data.0.parq)
>  failed, error was: 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/_impala_insert_staging/404ac3027c4b7169_39d16a2d/.404ac3027c4b7169-39d16a2d_1475855322_dir/p=�%FFBF�营业务数据/404ac3027c4b7169-39d16a2d_1585092794_data.0.parq
> Error(5): Input/output error
> [localhost:21050] default> insert into my_part_tbl partition(p='运') values 
> (0);
> Query: insert into my_part_tbl partition(p='运') values (0)
> Query submitted at: 2022-08-16 10:04:22 (Coordinator: 
> http://quanlong-OptiPlex-BJ:25000)
> Query progress can be monitored at: 
> http://quanlong-OptiPlex-BJ:25000/query_plan?query_id=a64e5883473ec28d:86e7e335
> ERROR: Error(s) moving partition files. First error (of 1) was: Hdfs op 
> (RENAME 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/_impala_insert_staging/a64e5883473ec28d_86e7e335/.a64e5883473ec28d-86e7e335_1582623091_dir/p=�%FFBF�/a64e5883473ec28d-86e7e335_163454510_data.0.parq
>  TO 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/p=�%FFBF�/a64e5883473ec28d-86e7e335_163454510_data.0.parq)
>  failed, error was: 
> hdfs://localhost:20500/test-warehouse/my_part_tbl/_impala_insert_staging/a64e5883473ec28d_86e7e335/.a64e5883473ec28d-86e7e335_1582623091_dir/p=�%FFBF�/a64e5883473ec28d-86e7e335_163454510_data.0.parq
> Error(5): Input/output error
> {noformat}
> However, partition value without the character '运' is OK:
> {noformat}

[jira] [Commented] (IMPALA-12934) Import parser files from Calcite into Impala

2024-05-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844829#comment-17844829
 ] 

ASF subversion and git services commented on IMPALA-12934:
--

Commit 2a3ce2071baff88b08f00a2e1855560f1d75eb99 in impala's branch 
refs/heads/master from Steve Carlin
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=2a3ce2071 ]

IMPALA-12934: Added Calcite parsing files to Impala

Adds the framework for creating Impala's own parsing syntax using the base
Calcite Parser.jj file.

The Parser.jj file here was taken from Calcite 1.36, so with this commit
we are using the same parsing analysis as Calcite 1.36. Any future changes
made on top of the Parser.jj file or the config.fmpp file are
Impala-specific, so a diff against this commit shows all the Impala
parsing changes.

The config.fmpp file was taken from Calcite 1.36's default_config.fmpp.
Calcite's intention for the config.fmpp file is to allow markup of
variables in the Parser.jj file, so it is always preferable to modify the
default_config.fmpp file when possible. Our version is taken from
https://github.com/apache/calcite/blob/main/core/src/main/codegen/config.fmpp
and slightly modified with the class name to make it compile for Impala.

No unit test is needed since there is no functional change. The Calcite
planner will eventually make changes in the ".jj" file to support the
differences between the Impala parser and the Calcite parser.

Change-Id: If756b5ea8beb85661a30fb5d029e74ebb6719767
Reviewed-on: http://gerrit.cloudera.org:8080/21194
Tested-by: Impala Public Jenkins 
Reviewed-by: Joe McDonnell 


> Import parser files from Calcite into Impala
> 
>
> Key: IMPALA-12934
> URL: https://issues.apache.org/jira/browse/IMPALA-12934
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Steve Carlin
>Priority: Major
>
> Since the Impala SQL syntax is different from the Calcite SQL syntax, Impala 
> needs its own parsing files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13058) TestRuntimeFilters.test_basic_filters failed on ARM builds

2024-05-08 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844689#comment-17844689
 ] 

ASF subversion and git services commented on IMPALA-13058:
--

Commit fdb87a755a2ac581b3144f1506b6e901ec8d2da2 in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=fdb87a755 ]

IMPALA-13058: Init first_arrival_time_ and completion_time_ with -1

Impala running on an ARM machine shows the 'arch_sys_counter' clock
source being used rather than the more precise 'tsc'. This causes
MonotonicStopwatch::Now() to use 'CLOCK_MONOTONIC_COARSE' rather than
'CLOCK_MONOTONIC'. This is what is printed near the beginning of the
impalad log:

I0506 13:49:15.429359 355337 init.cc:600] OS distribution: Red Hat Enterprise 
Linux 8.8 (Ootpa)
OS version: Linux version 4.18.0-477.15.1.el8_8.aarch64 ...
Clock: clocksource: 'arch_sys_counter', clockid_t: CLOCK_MONOTONIC_COARSE

This difference in clock source causes a test failure in
test_runtime_filters.py::TestRuntimeFilters::test_basic_filters. This
patch fixes the issue by initializing the first_arrival_time_ and
completion_time_ fields of Coordinator::FilterState with -1 and
accepting 0 as a valid value for those fields.

query_events_ initialization and start are also moved to the beginning
of ClientRequestState's constructor.
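
A self-contained sketch of the sentinel change; the field names mirror the
commit message, while the real code is C++ in the coordinator:

{code:java}
// With a coarse monotonic clock, a reading of 0 can be legitimate, so
// "unset" must be encoded as -1 rather than 0.
class FilterStateSketch {
  private long firstArrivalTime = -1;  // -1 means no filter update arrived yet
  private long completionTime = -1;    // -1 means the filter is not complete yet

  void onFilterUpdate(long nowMs) {
    if (firstArrivalTime == -1) firstArrivalTime = nowMs;  // 0 is a valid time
  }

  void onComplete(long nowMs) {
    if (completionTime == -1) completionTime = nowMs;
  }

  boolean arrived() { return firstArrivalTime != -1; }
  boolean completed() { return completionTime != -1; }
}
{code}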

Testing:
- Tweak row_regex pattern in runtime_filters.test.
- Loop and pass test_runtime_filters.py in exhaustive mode 3 times
  in ARM machine.

Change-Id: I1176e2118bb03414ab35049f50009ff0e8c63f58
Reviewed-on: http://gerrit.cloudera.org:8080/21405
Reviewed-by: Wenzhe Zhou 
Tested-by: Impala Public Jenkins 


> TestRuntimeFilters.test_basic_filters failed on ARM builds
> --
>
> Key: IMPALA-13058
> URL: https://issues.apache.org/jira/browse/IMPALA-13058
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Reporter: Wenzhe Zhou
>Assignee: Riza Suminto
>Priority: Major
>
> Stacktrace:
> query_test/test_runtime_filters.py:98: in test_basic_filters
> self.run_test_case('QueryTest/runtime_filters', vector, 
> test_file_vars=vars)
> /data/jenkins/workspace/impala-cdw-master-exhaustive-release-arm/repos/Impala/tests/common/impala_test_suite.py:851:
>  in run_test_case
> update_section=pytest.config.option.update_results)
> /data/jenkins/workspace/impala-cdw-master-exhaustive-release-arm/repos/Impala/tests/common/test_result_verifier.py:701:
>  in verify_runtime_profile
> actual))
> E   AssertionError: Did not find matches for lines in runtime profile:
> E   EXPECTED LINES:
> E   row_regex: .*REMOTE.*ms.*ms.*true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13061) Query Live table fails to load if default_transactional_type=insert_only set globally

2024-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844535#comment-17844535
 ] 

ASF subversion and git services commented on IMPALA-13061:
--

Commit 1233ac3c579b5929866dba23debae63e5d2aae90 in impala's branch 
refs/heads/master from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=1233ac3c5 ]

IMPALA-13061: Create query live as external table

Impala determines whether a managed table is transactional based on the
'transactional' table property. It assumes any managed table with
transactional=true returns non-null getValidWriteIds.

When 'default_transactional_type=insert_only' is set at startup (via
default_query_options), impala_query_live is created as a managed table
with transactional=true, but SystemTables don't implement
getValidWriteIds and are not meant to be transactional.

DataSourceTable has a similar problem, and when a JDBC table is created,
setJdbcDataSourceProperties sets transactional=false. This patch uses
CREATE EXTERNAL TABLE sys.impala_query_live so that it is not created as
a managed table and 'transactional' is not set. That avoids creating a
SystemTable that Impala can't read (it encounters an
IllegalStateException).
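
A toy model of the assumption that breaks; all names are illustrative, not
Impala's actual API:

{code:java}
import java.util.Map;

class TransactionalCheckSketch {
  // The described assumption: a managed table with transactional=true is
  // expected to supply valid write ids, which a SystemTable cannot do.
  static boolean expectsValidWriteIds(boolean isManaged, Map<String, String> props) {
    return isManaged && "true".equalsIgnoreCase(props.get("transactional"));
  }

  public static void main(String[] args) {
    // Created as EXTERNAL: not managed and 'transactional' never set, so
    // the write-id assumption is never applied to the system table.
    System.out.println(expectsValidWriteIds(false, Map.of()));  // false
    // Managed with transactional=true: the failing pre-patch configuration.
    System.out.println(
        expectsValidWriteIds(true, Map.of("transactional", "true")));  // true
  }
}
{code}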

Change-Id: Ie60a2bd03fabc63c85bcd9fa2489e9d47cd2aa65
Reviewed-on: http://gerrit.cloudera.org:8080/21401
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Query Live table fails to load if default_transactional_type=insert_only set 
> globally
> -
>
> Key: IMPALA-13061
> URL: https://issues.apache.org/jira/browse/IMPALA-13061
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Critical
>
> If transactional type defaults to insert_only for all queries via
> {code}
> --default_query_options=default_transactional_type=insert_only
> {code}
> the table definition for {{sys.impala_query_live}} is set to transactional, 
> which causes an exception in catalogd
> {code}
> I0506 22:07:42.808758  3972 jni-util.cc:302] 
> 4547b965aeebc5f0:8ba96c58] java.lang.IllegalStateException
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:496)
> at org.apache.impala.catalog.Table.getPartialInfo(Table.java:851)
> at 
> org.apache.impala.catalog.CatalogServiceCatalog.doGetPartialCatalogObject(CatalogServiceCatalog.java:3818)
> at 
> org.apache.impala.catalog.CatalogServiceCatalog.getPartialCatalogObject(CatalogServiceCatalog.java:3714)
> at 
> org.apache.impala.catalog.CatalogServiceCatalog.getPartialCatalogObject(CatalogServiceCatalog.java:3681)
> at 
> org.apache.impala.service.JniCatalog.lambda$getPartialCatalogObject$10(JniCatalog.java:431)
> at 
> org.apache.impala.service.JniCatalogOp.lambda$execAndSerialize$1(JniCatalogOp.java:90)
> at org.apache.impala.service.JniCatalogOp.execOp(JniCatalogOp.java:58)
> at 
> org.apache.impala.service.JniCatalogOp.execAndSerialize(JniCatalogOp.java:89)
> at 
> org.apache.impala.service.JniCatalogOp.execAndSerializeSilentStartAndFinish(JniCatalogOp.java:109)
> at 
> org.apache.impala.service.JniCatalog.execAndSerializeSilentStartAndFinish(JniCatalog.java:253)
> at 
> org.apache.impala.service.JniCatalog.getPartialCatalogObject(JniCatalog.java:430)
> {code}
> We need to override that setting while creating {{sys.impala_query_live}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12977) add search and pagination to /hadoop-varz

2024-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844281#comment-17844281
 ] 

ASF subversion and git services commented on IMPALA-12977:
--

Commit 74960cca6f8edbda1cca4fbed43a2c75a89a690d in impala's branch 
refs/heads/branch-4.4.0 from Saurabh Katiyal
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=74960cca6 ]

IMPALA-12977: add search and pagination to /hadoop-varz

Added search and pagination feature to /hadoop-varz

Change-Id: Ic8cac23b655fa58ce12d9857649705574614a5f0
Reviewed-on: http://gerrit.cloudera.org:8080/21329
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
(cherry picked from commit 7c98ebb7b149a03a1fdb10f0da124c3fd2265f5d)


>  add search and pagination to /hadoop-varz
> --
>
> Key: IMPALA-12977
> URL: https://issues.apache.org/jira/browse/IMPALA-12977
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Saurabh Katiyal
>Assignee: Saurabh Katiyal
>Priority: Minor
>  Labels: newbie
> Fix For: Impala 4.4.0
>
>
> /hadoop-varz has 2000+ configurations. It'd be nice to have some of the 
> tools, like search and pagination, that we get on /varz (flags.tmpl).
> Existing template for /hadoop-varz:
> [https://github.com/apache/impala/blob/ba4cb95b6251911fa9e057cea1cb37958d339fed/www/hadoop-varz.tmpl]
> Existing template for /varz:
> [https://github.com/apache/impala/blob/ba4cb95b6251911fa9e057cea1cb37958d339fed/www/flags.tmpl]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13009) Potential leak of partition deletions in the catalog topic

2024-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844278#comment-17844278
 ] 

ASF subversion and git services commented on IMPALA-13009:
--

Commit 5d32919f46117213249c60574f77e3f9bb66ed90 in impala's branch 
refs/heads/branch-4.4.0 from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=5d32919f4 ]

IMPALA-13009: Fix catalogd not sending deletion updates for some dropped 
partitions

*Background*

Since IMPALA-3127, catalogd sends incremental partition updates based on
the last sent table snapshot ('maxSentPartitionId_' to be specific).
Dropped partitions since the last catalog update are tracked in
'droppedPartitions_' of HdfsTable. When catalogd collects the next
catalog update, the dropped partitions are included, and HdfsTable then
clears the set. See details in
CatalogServiceCatalog#addHdfsPartitionsToCatalogDelta().

If an HdfsTable is invalidated, it's replaced with an IncompleteTable,
which doesn't track any partitions. The HdfsTable object is then added
to the deleteLog so catalogd can send deletion updates for all its
partitions. The same happens if the HdfsTable is dropped. However, the
previously dropped partitions are not collected in this case, which
results in a leak in the catalog topic if the partition name is never
reused. Note that in the catalog topic, the key of a partition update
consists of the table name and the partition name, so if the partition
is added back to the table, the topic key is reused, which resolves the
leak.

The leak will be observed when a coordinator restarts. In the initial
catalog update sent from the statestore, the coordinator will find some
partition updates that are not referenced by the HdfsTable (assuming the
table is used again after the INVALIDATE). Then a Precondition check
fails and the table is not added to the coordinator.

*Overview of the patch*

This patch fixes the leak by also collecting the dropped partitions when
adding the HdfsTable to the deleteLog. A new field, dropped_partitions,
is added in THdfsTable to collect them. It's only used when catalogd
collects catalog updates.

Removes the Precondition check in coordinator and just reports the stale
partitions since IMPALA-12831 could also introduce them.

Also adds a log line in CatalogOpExecutor.alterTableDropPartition() to
show the dropped partition names for better diagnostics.
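
A self-contained sketch of the fix idea; the types are toy stand-ins for the
catalog structures, and the new field name follows the message above:

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class DeleteLogSketch {
  static class HdfsTableModel {
    // Partitions dropped since the last catalog update, pending deletion.
    final Set<String> droppedPartitions = new HashSet<>();
  }

  static class MinimalTableSnapshot {
    // Stand-in for THdfsTable's new dropped_partitions field.
    final List<String> droppedPartitions = new ArrayList<>();
  }

  // Before the fix, invalidating or dropping the table discarded
  // droppedPartitions, leaking their catalog-topic entries. The fix copies
  // them into the snapshot that goes to the deleteLog.
  static MinimalTableSnapshot toDeleteLogEntry(HdfsTableModel tbl) {
    MinimalTableSnapshot snap = new MinimalTableSnapshot();
    snap.droppedPartitions.addAll(tbl.droppedPartitions);
    return snap;
  }
}
{code}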

Tests
 - Added e2e tests

Change-Id: I12a68158dca18ee48c9564ea16b7484c9f5b5d21
Reviewed-on: http://gerrit.cloudera.org:8080/21326
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
(cherry picked from commit ee21427d26620b40d38c706b4944d2831f84f6f5)


> Potential leak of partition deletions in the catalog topic
> --
>
> Key: IMPALA-13009
> URL: https://issues.apache.org/jira/browse/IMPALA-13009
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.0.0, Impala 4.1.0, Impala 4.2.0, Impala 4.1.1, 
> Impala 4.1.2, Impala 4.3.0
>Reporter: Quanlong Huang
>Assignee: Quanlong Huang
>Priority: Critical
> Fix For: Impala 4.5.0
>
>
> Catalogd might not send partition deletions to the catalog topic in the 
> following scenario:
> * Some partitions of a table are dropped.
> * The HdfsTable object is subsequently removed, before catalogd collects the 
> dropped partitions.
> In that case, catalogd loses track of the dropped partitions, so their updates 
> remain in the catalog topic until the partition names are reused again.
> Note that the HdfsTable object can be removed by commands like DropTable or 
> INVALIDATE.
> The leaked partitions will be detected when a coordinator restarts. An 
> IllegalStateException complaining about stale partitions will be reported, 
> causing the table not to be added to the coordinator's catalog cache.
> {noformat}
> E0417 16:41:22.317298 20746 ImpaladCatalog.java:264] Error adding catalog 
> object: Received stale partition in a statestore update: 
> THdfsPartition(partitionKeyExprs:[TExpr(nodes:[TExprNode(node_type:INT_LITERAL,
>  type:TColumnType(types:[TTypeNode(type:SCALAR, 
> scalar_type:TScalarType(type:INT))]), num_children:0, is_constant:true, 
> int_literal:TIntLiteral(value:106), is_codegen_disabled:false)])], 
> location:THdfsPartitionLocation(prefix_index:0, suffix:p=106), id:138, 
> file_desc:[THdfsFileDesc(file_desc_data:18 00 00 00 00 00 00 00 00 00 0E 00 
> 1C 00 18 00 10 00 00 00 08 00 04 00 0E 00 00 00 18 00 00 00 8B 0E 2D EB 8E 01 
> 00 00 04 00 00 00 00 00 00 00 0C 00 00 00 01 00 00 00 4C 00 00 00 36 00 00 00 
> 34 34 34 37 62 35 66 34 62 30 65 64 66 64 65 31 2D 32 33 33 61 64 62 38 35 30 
> 30 30 30 30 30 30 30 5F 36 36 34 31 30 39 33 37 33 5F 64 61 74 61 2E 30 2E 74 
> 78 74 00 00 0C 00 14 00 00 00 0C 00...)], 

[jira] [Commented] (IMPALA-12831) HdfsTable.toMinimalTCatalogObject() should hold table read lock to generate incremental updates

2024-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844280#comment-17844280
 ] 

ASF subversion and git services commented on IMPALA-12831:
--

Commit 5d32919f46117213249c60574f77e3f9bb66ed90 in impala's branch 
refs/heads/branch-4.4.0 from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=5d32919f4 ]

IMPALA-13009: Fix catalogd not sending deletion updates for some dropped 
partitions

*Background*

Since IMPALA-3127, catalogd sends incremental partition updates based on
the last sent table snapshot ('maxSentPartitionId_' to be specific).
Dropped partitions since the last catalog update are tracked in
'droppedPartitions_' of HdfsTable. When catalogd collects the next
catalog update, the dropped partitions are included, and HdfsTable then
clears the set. See details in
CatalogServiceCatalog#addHdfsPartitionsToCatalogDelta().

If an HdfsTable is invalidated, it's replaced with an IncompleteTable,
which doesn't track any partitions. The HdfsTable object is then added
to the deleteLog so catalogd can send deletion updates for all its
partitions. The same happens if the HdfsTable is dropped. However, the
previously dropped partitions are not collected in this case, which
results in a leak in the catalog topic if the partition name is never
reused. Note that in the catalog topic, the key of a partition update
consists of the table name and the partition name, so if the partition
is added back to the table, the topic key is reused, which resolves the
leak.

The leak will be observed when a coordinator restarts. In the initial
catalog update sent from the statestore, the coordinator will find some
partition updates that are not referenced by the HdfsTable (assuming the
table is used again after the INVALIDATE). Then a Precondition check
fails and the table is not added to the coordinator.

*Overview of the patch*

This patch fixes the leak by also collecting the dropped partitions when
adding the HdfsTable to the deleteLog. A new field, dropped_partitions,
is added in THdfsTable to collect them. It's only used when catalogd
collects catalog updates.

Removes the Precondition check in coordinator and just reports the stale
partitions since IMPALA-12831 could also introduce them.

Also adds a log line in CatalogOpExecutor.alterTableDropPartition() to
show the dropped partition names for better diagnostics.

Tests
 - Added e2e tests

Change-Id: I12a68158dca18ee48c9564ea16b7484c9f5b5d21
Reviewed-on: http://gerrit.cloudera.org:8080/21326
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
(cherry picked from commit ee21427d26620b40d38c706b4944d2831f84f6f5)


> HdfsTable.toMinimalTCatalogObject() should hold table read lock to generate 
> incremental updates
> ---
>
> Key: IMPALA-12831
> URL: https://issues.apache.org/jira/browse/IMPALA-12831
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.0.0, Impala 4.1.0, Impala 4.2.0, Impala 4.1.1, 
> Impala 4.1.2, Impala 4.3.0
>Reporter: Quanlong Huang
>Assignee: Quanlong Huang
>Priority: Critical
> Fix For: Impala 4.4.0
>
>
> When enable_incremental_metadata_updates=true (default), catalogd sends 
> incremental partition updates to coordinators, which goes into 
> HdfsTable.toMinimalTCatalogObject():
> {code:java}
>   public TCatalogObject toMinimalTCatalogObject() {
> TCatalogObject catalogObject = super.toMinimalTCatalogObject();
> if (!BackendConfig.INSTANCE.isIncrementalMetadataUpdatesEnabled()) {
>   return catalogObject;
> }
> catalogObject.getTable().setTable_type(TTableType.HDFS_TABLE);
> THdfsTable hdfsTable = new THdfsTable(hdfsBaseDir_, getColumnNames(),
> nullPartitionKeyValue_, nullColumnValue_,
> /*idToPartition=*/ new HashMap<>(),
> /*prototypePartition=*/ new THdfsPartition());
> for (HdfsPartition part : partitionMap_.values()) {
>   hdfsTable.partitions.put(part.getId(), part.toMinimalTHdfsPartition());
> }
> hdfsTable.setHas_full_partitions(false);
> // The minimal catalog object of partitions contain the partition names.
> hdfsTable.setHas_partition_names(true);
> catalogObject.getTable().setHdfs_table(hdfsTable);
> return catalogObject;
>   }{code}
> Accessing table fields without holding the table read lock might fail due to 
> concurrent DDLs. All workloads that use this method (e.g. INVALIDATE 
> commands) could hit this issue. We've seen the event-processor fail in 
> processing a RELOAD event that wants to invalidate an HdfsTable:
> {noformat}
> E0216 16:23:44.283689   253 MetastoreEventsProcessor.java:899] Unexpected 
> exception received while processing event
> Java exception follows:

[jira] [Commented] (IMPALA-3127) Decouple partitions from tables

2024-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-3127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844279#comment-17844279
 ] 

ASF subversion and git services commented on IMPALA-3127:
-

Commit 5d32919f46117213249c60574f77e3f9bb66ed90 in impala's branch 
refs/heads/branch-4.4.0 from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=5d32919f4 ]

IMPALA-13009: Fix catalogd not sending deletion updates for some dropped 
partitions

*Background*

Since IMPALA-3127, catalogd sends incremental partition updates based on
the last sent table snapshot ('maxSentPartitionId_' to be specific).
Dropped partitions since the last catalog update are tracked in
'droppedPartitions_' of HdfsTable. When catalogd collects the next
catalog update, the dropped partitions are included, and HdfsTable then
clears the set. See details in
CatalogServiceCatalog#addHdfsPartitionsToCatalogDelta().

If an HdfsTable is invalidated, it's replaced with an IncompleteTable,
which doesn't track any partitions. The HdfsTable object is then added
to the deleteLog so catalogd can send deletion updates for all its
partitions. The same happens if the HdfsTable is dropped. However, the
previously dropped partitions are not collected in this case, which
results in a leak in the catalog topic if the partition name is never
reused. Note that in the catalog topic, the key of a partition update
consists of the table name and the partition name, so if the partition
is added back to the table, the topic key is reused, which resolves the
leak.

The leak will be observed when a coordinator restarts. In the initial
catalog update sent from the statestore, the coordinator will find some
partition updates that are not referenced by the HdfsTable (assuming the
table is used again after the INVALIDATE). Then a Precondition check
fails and the table is not added to the coordinator.

*Overview of the patch*

This patch fixes the leak by also collecting the dropped partitions when
adding the HdfsTable to the deleteLog. A new field, dropped_partitions,
is added in THdfsTable to collect them. It's only used when catalogd
collects catalog updates.

Removes the Precondition check in coordinator and just reports the stale
partitions since IMPALA-12831 could also introduce them.

Also adds a log line in CatalogOpExecutor.alterTableDropPartition() to
show the dropped partition names for better diagnostics.

Tests
 - Added e2e tests

Change-Id: I12a68158dca18ee48c9564ea16b7484c9f5b5d21
Reviewed-on: http://gerrit.cloudera.org:8080/21326
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
(cherry picked from commit ee21427d26620b40d38c706b4944d2831f84f6f5)


> Decouple partitions from tables
> ---
>
> Key: IMPALA-3127
> URL: https://issues.apache.org/jira/browse/IMPALA-3127
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Catalog
>Affects Versions: Impala 2.2.4
>Reporter: Dimitris Tsirogiannis
>Assignee: Quanlong Huang
>Priority: Major
>  Labels: catalog-server, performance
> Fix For: Impala 4.0.0
>
>
> Currently, partitions are tightly integrated into the HdfsTable objects, 
> making incremental metadata updates difficult to perform. Furthermore, the 
> catalog transmits the entire table metadata even when only a few partitions change, 
> introducing significant latencies, wasting network bandwidth and CPU cycles 
> while updating table metadata at the receiving impalads. As a first step, we 
> should decouple partitions from tables and add them as a separate level in 
> the hierarchy of catalog entities (server-db-table-partition). Subsequently, 
> the catalog should transmit only entities that have changed after DDL/DML 
> statements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13009) Potential leak of partition deletions in the catalog topic

2024-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844099#comment-17844099
 ] 

ASF subversion and git services commented on IMPALA-13009:
--

Commit ee21427d26620b40d38c706b4944d2831f84f6f5 in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=ee21427d2 ]

IMPALA-13009: Fix catalogd not sending deletion updates for some dropped 
partitions

*Background*

Since IMPALA-3127, catalogd sends incremental partition updates based on
the last sent table snapshot ('maxSentPartitionId_' to be specific).
Dropped partitions since the last catalog update are tracked in
'droppedPartitions_' of HdfsTable. When catalogd collects the next
catalog update, the dropped partitions are included, and HdfsTable then
clears the set. See details in
CatalogServiceCatalog#addHdfsPartitionsToCatalogDelta().

If an HdfsTable is invalidated, it's replaced with an IncompleteTable,
which doesn't track any partitions. The HdfsTable object is then added
to the deleteLog so catalogd can send deletion updates for all its
partitions. The same happens if the HdfsTable is dropped. However, the
previously dropped partitions are not collected in this case, which
results in a leak in the catalog topic if the partition name is never
reused. Note that in the catalog topic, the key of a partition update
consists of the table name and the partition name, so if the partition
is added back to the table, the topic key is reused, which resolves the
leak.

The leak will be observed when a coordinator restarts. In the initial
catalog update sent from the statestore, the coordinator will find some
partition updates that are not referenced by the HdfsTable (assuming the
table is used again after the INVALIDATE). Then a Precondition check
fails and the table is not added to the coordinator.

*Overview of the patch*

This patch fixes the leak by also collecting the dropped partitions when
adding the HdfsTable to the deleteLog. A new field, dropped_partitions,
is added in THdfsTable to collect them. It's only used when catalogd
collects catalog updates.

Removes the Precondition check in coordinator and just reports the stale
partitions since IMPALA-12831 could also introduce them.

Also adds a log line in CatalogOpExecutor.alterTableDropPartition() to
show the dropped partition names for better diagnostics.

Tests
 - Added e2e tests

Change-Id: I12a68158dca18ee48c9564ea16b7484c9f5b5d21
Reviewed-on: http://gerrit.cloudera.org:8080/21326
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Potential leak of partition deletions in the catalog topic
> --
>
> Key: IMPALA-13009
> URL: https://issues.apache.org/jira/browse/IMPALA-13009
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.0.0, Impala 4.1.0, Impala 4.2.0, Impala 4.1.1, 
> Impala 4.1.2, Impala 4.3.0
>Reporter: Quanlong Huang
>Assignee: Quanlong Huang
>Priority: Critical
>
> Catalogd might not send partition deletions to the catalog topic in the 
> following scenario:
> * Some partitions of a table are dropped.
> * The HdfsTable object is subsequently removed, before catalogd collects the 
> dropped partitions.
> In that case, catalogd loses track of the dropped partitions, so their updates 
> remain in the catalog topic until the partition names are reused again.
> Note that the HdfsTable object can be removed by commands like DropTable or 
> INVALIDATE.
> The leaked partitions will be detected when a coordinator restarts. An 
> IllegalStateException complaining about stale partitions will be reported, 
> causing the table not to be added to the coordinator's catalog cache.
> {noformat}
> E0417 16:41:22.317298 20746 ImpaladCatalog.java:264] Error adding catalog 
> object: Received stale partition in a statestore update: 
> THdfsPartition(partitionKeyExprs:[TExpr(nodes:[TExprNode(node_type:INT_LITERAL,
>  type:TColumnType(types:[TTypeNode(type:SCALAR, 
> scalar_type:TScalarType(type:INT))]), num_children:0, is_constant:true, 
> int_literal:TIntLiteral(value:106), is_codegen_disabled:false)])], 
> location:THdfsPartitionLocation(prefix_index:0, suffix:p=106), id:138, 
> file_desc:[THdfsFileDesc(file_desc_data:18 00 00 00 00 00 00 00 00 00 0E 00 
> 1C 00 18 00 10 00 00 00 08 00 04 00 0E 00 00 00 18 00 00 00 8B 0E 2D EB 8E 01 
> 00 00 04 00 00 00 00 00 00 00 0C 00 00 00 01 00 00 00 4C 00 00 00 36 00 00 00 
> 34 34 34 37 62 35 66 34 62 30 65 64 66 64 65 31 2D 32 33 33 61 64 62 38 35 30 
> 30 30 30 30 30 30 30 5F 36 36 34 31 30 39 33 37 33 5F 64 61 74 61 2E 30 2E 74 
> 78 74 00 00 0C 00 14 00 00 00 0C 00...)], access_level:READ_WRITE, 
> stats:TTableStats(num_rows:-1), is_marked_cached:false, 
> 

[jira] [Commented] (IMPALA-12831) HdfsTable.toMinimalTCatalogObject() should hold table read lock to generate incremental updates

2024-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844101#comment-17844101
 ] 

ASF subversion and git services commented on IMPALA-12831:
--

Commit ee21427d26620b40d38c706b4944d2831f84f6f5 in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=ee21427d2 ]

IMPALA-13009: Fix catalogd not sending deletion updates for some dropped 
partitions

*Background*

Since IMPALA-3127, catalogd sends incremental partition updates based on
the last sent table snapshot ('maxSentPartitionId_' to be specific).
Dropped partitions since the last catalog update are tracked in
'droppedPartitions_' of HdfsTable. When catalogd collects the next
catalog update, the dropped partitions are included, and HdfsTable then
clears the set. See details in
CatalogServiceCatalog#addHdfsPartitionsToCatalogDelta().

If an HdfsTable is invalidated, it's replaced with an IncompleteTable,
which doesn't track any partitions. The HdfsTable object is then added
to the deleteLog so catalogd can send deletion updates for all its
partitions. The same happens if the HdfsTable is dropped. However, the
previously dropped partitions are not collected in this case, which
results in a leak in the catalog topic if the partition name is never
reused. Note that in the catalog topic, the key of a partition update
consists of the table name and the partition name, so if the partition
is added back to the table, the topic key is reused, which resolves the
leak.

The leak will be observed when a coordinator restarts. In the initial
catalog update sent from the statestore, the coordinator will find some
partition updates that are not referenced by the HdfsTable (assuming the
table is used again after the INVALIDATE). Then a Precondition check
fails and the table is not added to the coordinator.

*Overview of the patch*

This patch fixes the leak by also collecting the dropped partitions when
adding the HdfsTable to the deleteLog. A new field, dropped_partitions,
is added in THdfsTable to collect them. It's only used when catalogd
collects catalog updates.

Removes the Precondition check in coordinator and just reports the stale
partitions since IMPALA-12831 could also introduce them.

Also adds a log line in CatalogOpExecutor.alterTableDropPartition() to
show the dropped partition names for better diagnostics.

Tests
 - Added e2e tests

Change-Id: I12a68158dca18ee48c9564ea16b7484c9f5b5d21
Reviewed-on: http://gerrit.cloudera.org:8080/21326
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> HdfsTable.toMinimalTCatalogObject() should hold table read lock to generate 
> incremental updates
> ---
>
> Key: IMPALA-12831
> URL: https://issues.apache.org/jira/browse/IMPALA-12831
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.0.0, Impala 4.1.0, Impala 4.2.0, Impala 4.1.1, 
> Impala 4.1.2, Impala 4.3.0
>Reporter: Quanlong Huang
>Assignee: Quanlong Huang
>Priority: Critical
> Fix For: Impala 4.4.0
>
>
> When enable_incremental_metadata_updates=true (default), catalogd sends 
> incremental partition updates to coordinators, which goes into 
> HdfsTable.toMinimalTCatalogObject():
> {code:java}
>   public TCatalogObject toMinimalTCatalogObject() {
> TCatalogObject catalogObject = super.toMinimalTCatalogObject();
> if (!BackendConfig.INSTANCE.isIncrementalMetadataUpdatesEnabled()) {
>   return catalogObject;
> }
> catalogObject.getTable().setTable_type(TTableType.HDFS_TABLE);
> THdfsTable hdfsTable = new THdfsTable(hdfsBaseDir_, getColumnNames(),
> nullPartitionKeyValue_, nullColumnValue_,
> /*idToPartition=*/ new HashMap<>(),
> /*prototypePartition=*/ new THdfsPartition());
> for (HdfsPartition part : partitionMap_.values()) {
>   hdfsTable.partitions.put(part.getId(), part.toMinimalTHdfsPartition());
> }
> hdfsTable.setHas_full_partitions(false);
> // The minimal catalog object of partitions contain the partition names.
> hdfsTable.setHas_partition_names(true);
> catalogObject.getTable().setHdfs_table(hdfsTable);
> return catalogObject;
>   }{code}
> Accessing table fields without holding the table read lock might fail due to 
> concurrent DDLs. All workloads that use this method (e.g. INVALIDATE 
> commands) could hit this issue. We've seen the event-processor fail in 
> processing a RELOAD event that wants to invalidate an HdfsTable:
> {noformat}
> E0216 16:23:44.283689   253 MetastoreEventsProcessor.java:899] Unexpected 
> exception received while processing event
> Java exception follows:
> java.util.ConcurrentModificationException
>   at 

[jira] [Commented] (IMPALA-3127) Decouple partitions from tables

2024-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-3127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844100#comment-17844100
 ] 

ASF subversion and git services commented on IMPALA-3127:
-

Commit ee21427d26620b40d38c706b4944d2831f84f6f5 in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=ee21427d2 ]

IMPALA-13009: Fix catalogd not sending deletion updates for some dropped 
partitions

*Background*

Since IMPALA-3127, catalogd sends incremental partition updates based on
the last sent table snapshot ('maxSentPartitionId_' to be specific).
Dropped partitions since the last catalog update are tracked in
'droppedPartitions_' of HdfsTable. When catalogd collects the next
catalog update, the dropped partitions are included, and HdfsTable then
clears the set. See details in
CatalogServiceCatalog#addHdfsPartitionsToCatalogDelta().

If an HdfsTable is invalidated, it's replaced with an IncompleteTable,
which doesn't track any partitions. The HdfsTable object is then added
to the deleteLog so catalogd can send deletion updates for all its
partitions. The same happens if the HdfsTable is dropped. However, the
previously dropped partitions are not collected in this case, which
results in a leak in the catalog topic if the partition name is never
reused. Note that in the catalog topic, the key of a partition update
consists of the table name and the partition name, so if the partition
is added back to the table, the topic key is reused, which resolves the
leak.

The leak will be observed when a coordinator restarts. In the initial
catalog update sent from the statestore, the coordinator will find some
partition updates that are not referenced by the HdfsTable (assuming the
table is used again after the INVALIDATE). Then a Precondition check
fails and the table is not added to the coordinator.

*Overview of the patch*

This patch fixes the leak by also collecting the dropped partitions when
adding the HdfsTable to the deleteLog. A new field, dropped_partitions,
is added in THdfsTable to collect them. It's only used when catalogd
collects catalog updates.

Removes the Precondition check in coordinator and just reports the stale
partitions since IMPALA-12831 could also introduce them.

Also adds a log line in CatalogOpExecutor.alterTableDropPartition() to
show the dropped partition names for better diagnostics.

Tests
 - Added e2e tests

Change-Id: I12a68158dca18ee48c9564ea16b7484c9f5b5d21
Reviewed-on: http://gerrit.cloudera.org:8080/21326
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Decouple partitions from tables
> ---
>
> Key: IMPALA-3127
> URL: https://issues.apache.org/jira/browse/IMPALA-3127
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Catalog
>Affects Versions: Impala 2.2.4
>Reporter: Dimitris Tsirogiannis
>Assignee: Quanlong Huang
>Priority: Major
>  Labels: catalog-server, performance
> Fix For: Impala 4.0.0
>
>
> Currently, partitions are tightly integrated into the HdfsTable objects, 
> making incremental metadata updates difficult to perform. Furthermore, the 
> catalog transmits the entire table metadata even when only a few partitions change, 
> introducing significant latencies, wasting network bandwidth and CPU cycles 
> while updating table metadata at the receiving impalads. As a first step, we 
> should decouple partitions from tables and add them as a separate level in 
> the hierarchy of catalog entities (server-db-table-partition). Subsequently, 
> the catalog should transmit only entities that have changed after DDL/DML 
> statements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12977) add search and pagination to /hadoop-varz

2024-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843870#comment-17843870
 ] 

ASF subversion and git services commented on IMPALA-12977:
--

Commit 7c98ebb7b149a03a1fdb10f0da124c3fd2265f5d in impala's branch 
refs/heads/master from Saurabh Katiyal
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=7c98ebb7b ]

IMPALA-12977: add search and pagination to /hadoop-varz

Added search and pagination feature to /hadoop-varz

Change-Id: Ic8cac23b655fa58ce12d9857649705574614a5f0
Reviewed-on: http://gerrit.cloudera.org:8080/21329
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


>  add search and pagination to /hadoop-varz
> --
>
> Key: IMPALA-12977
> URL: https://issues.apache.org/jira/browse/IMPALA-12977
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Saurabh Katiyal
>Assignee: Saurabh Katiyal
>Priority: Minor
>  Labels: newbie
>
> /hadoop-varz has 2000+ configurations. It'd be nice to have some of the 
> tools, like search and pagination, that we get on /varz (flags.tmpl).
> Existing template for /hadoop-varz:
> [https://github.com/apache/impala/blob/ba4cb95b6251911fa9e057cea1cb37958d339fed/www/hadoop-varz.tmpl]
> Existing template for /varz:
> [https://github.com/apache/impala/blob/ba4cb95b6251911fa9e057cea1cb37958d339fed/www/flags.tmpl]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13012) Completed queries write fails regularly under heavy load

2024-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843794#comment-17843794
 ] 

ASF subversion and git services commented on IMPALA-13012:
--

Commit 73f13f0e9f225400bb641c48da57a9871b5d8383 in impala's branch 
refs/heads/branch-4.4.0 from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=73f13f0e9 ]

IMPALA-13012: Lower default query_log_max_queued

Sets the query_log_max_queued default such that

  query_log_max_queued * num_columns(49) < statement_expression_limit

to avoid triggering e.g.

  AnalysisException: Exceeded the statement expression limit (25)
  Statement has 370039 expressions.

Also increases statement_expression_limit for insertion to avoid an
error if query_log_max_queued is changed.

Logs the time taken to write to the queries table to help with debugging
and adds the histogram "impala-server.completed-queries.write-durations".

Fixes InternalServer so it uses 'default_query_options'.
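
A small worked check of the sizing rule, assuming a
statement_expression_limit of 250000 (an assumption consistent with the
~5100 cap suggested in the issue below):

{code:java}
public class QueueCapSketch {
  public static void main(String[] args) {
    int statementExpressionLimit = 250_000;  // assumed limit
    int numColumns = 49;                     // columns in the query log table
    // Largest queue size whose single INSERT stays under the limit.
    System.out.println(statementExpressionLimit / numColumns);  // 5102
  }
}
{code}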

Change-Id: I6535675307d88cb65ba7d908f3c692e0cf3259d7
Reviewed-on: http://gerrit.cloudera.org:8080/21351
Reviewed-by: Michael Smith 
Tested-by: Michael Smith 
Reviewed-by: Riza Suminto 
(cherry picked from commit ba32d70891fd68c5c1234ed543b74c51661bf272)


> Completed queries write fails regularly under heavy load
> 
>
> Key: IMPALA-13012
> URL: https://issues.apache.org/jira/browse/IMPALA-13012
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.4.0
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Critical
> Fix For: Impala 4.4.0
>
>
> Under heavy test load (running EE tests), Impala regularly fails to write 
> completed queries with errors like
> {code}
> W0411 19:11:07.764967 32713 workload-management.cc:435] failed to write 
> completed queries table="sys.impala_query_log" record_count="10001"
> W0411 19:11:07.764983 32713 workload-management.cc:437] AnalysisException: 
> Exceeded the statement expression limit (25)
> Statement has 370039 expressions.
> {code}
> After a few attempts, it floods logs with an error for each query that could 
> not be written
> {code}
> E0411 19:11:24.646953 32713 workload-management.cc:376] could not write 
> completed query table="sys.impala_query_log" 
> query_id="3142ceb1380b58e6:715b83d9"
> {code}
> This seems like poor default behavior. Options for addressing it:
> # Decrease the default for {{query_log_max_queued}}. Inserts are pretty 
> constant at 37 expressions per entry. I'm not sure why that isn't 49, since 
> that's the number of columns we have; maybe some fields are frequently 
> omitted. I would cap {{query_log_max_queued}} to {{statement_expression_limit 
> / number_of_columns ~ 5100}}.
> # Allow workload management to {{set statement_expression_limit}} higher 
> using a similar formula. This may be relatively safe as the expressions are 
> simple.
> # Ideally we would skip expression parsing and construct TExecRequest 
> directly, but that's a much larger effort.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13005) Query Live table not visible via metastore

2024-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843793#comment-17843793
 ] 

ASF subversion and git services commented on IMPALA-13005:
--

Commit 91a9dd81ccd38e2e4b975e4ddf49f97c3e01b051 in impala's branch 
refs/heads/branch-4.4.0 from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=91a9dd81c ]

IMPALA-13005: Create Query Live table in HMS

Creates the 'sys.impala_query_live' table in HMS using a 'CREATE TABLE'
command similar to the one for 'sys.impala_query_log'. Updates the
frontend to identify a System Table based on the '__IMPALA_SYSTEM_TABLE'
property. Tables improperly marked with '__IMPALA_SYSTEM_TABLE' will
error when scanned because no relevant scanner will be available.
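
A toy sketch of the property-based identification; the property name is
quoted from the message, while the true/false value convention is an
assumption:

{code:java}
import java.util.Map;

class SystemTableCheckSketch {
  static final String SYSTEM_TABLE_PROPERTY = "__IMPALA_SYSTEM_TABLE";

  // A table is treated as a system table iff the marker property is present
  // and true; marked tables without a backing scanner fail at scan time.
  static boolean isSystemTable(Map<String, String> tblProperties) {
    return "true".equalsIgnoreCase(tblProperties.get(SYSTEM_TABLE_PROPERTY));
  }
}
{code}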

Creating the table in HMS simplifies supporting 'SHOW CREATE TABLE' and
'DESCRIBE EXTENDED', so both are allowed, for parity with the Query Log
table. 'COMPUTE STATS' is explicitly disabled on system tables as it
doesn't work correctly.

Makes System Tables work with local catalog mode, fixing

  LocalCatalogException: Unknown table type for table sys.impala_query_live

Updates workload management implementation to rely more on
SystemTables.thrift definition, and adds DCHECKs to verify completeness
and ordering.

Testing:
- adds additional test cases for changes to introspection commands
- passes existing test_query_live and test_query_log suites

Change-Id: Idf302ee54a819fdee2db0ae582a5eeddffe4a5b4
Reviewed-on: http://gerrit.cloudera.org:8080/21302
Reviewed-by: Riza Suminto 
Tested-by: Impala Public Jenkins 
(cherry picked from commit 73a9ef9c4c00419b5082f355d96961c5971634d6)


> Query Live table not visible via metastore
> --
>
> Key: IMPALA-13005
> URL: https://issues.apache.org/jira/browse/IMPALA-13005
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 4.4.0
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Major
> Fix For: Impala 4.4.0
>
>
> Tools that inspect Hive Metastore to view information about tables are unable 
> to see {{sys.impala_query_live}} when {{enable_workload_mgmt}} is enabled. 
> This presents an inconsistent view depending on whether inspecting tables via 
> Impala or Hive MetaStore.
> Register {{sys.impala_query_live}} with Hive MetaStore, similar to what 
> {{sys.impala_query_log}} does, to make table listing consistent.
> It also doesn't work with local catalog mode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13024) Several tests timeout waiting for admission

2024-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843792#comment-17843792
 ] 

ASF subversion and git services commented on IMPALA-13024:
--

Commit 7ca9798949ffd31b703c610e4eca20d0e0181ab3 in impala's branch 
refs/heads/branch-4.4.0 from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=7ca979894 ]

IMPALA-13024: Ignore slots if using default pool and empty group

Slot-based admission should not be enabled when using the default pool.
There is a bug where a coordinator-only query still does slot-based
admission because the executor group name is set to
ClusterMembershipMgr::EMPTY_GROUP_NAME ("empty group (using coordinator
only)"). This patch adds a check to recognize coordinator-only queries
in the default pool and skips slot-based admission for them.
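
A toy sketch of the added check; the group name constant is quoted from the
message, everything else is illustrative, and the real check lives in the
C++ admission controller:

{code:java}
class AdmissionSketch {
  static final String EMPTY_GROUP_NAME = "empty group (using coordinator only)";

  // Slot-based admission only applies when a real executor group is in use;
  // a coordinator-only query in the default pool bypasses it.
  static boolean useSlotBasedAdmission(boolean isDefaultPool, String execGroup) {
    boolean coordinatorOnly = EMPTY_GROUP_NAME.equals(execGroup);
    return !(isDefaultPool && coordinatorOnly);
  }
}
{code}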

Testing:
- Add BE test AdmissionControllerTest.CanAdmitRequestSlotsDefault.
- In test_executor_groups.py, split test_coordinator_concurrency to
  test_coordinator_concurrency_default and
  test_coordinator_concurrency_two_exec_group_cluster to show the
  behavior change.
- Pass core tests in ASAN build.

Change-Id: I0b08dea7ba0c78ac6b98c7a0b148df8fb036c4d0
Reviewed-on: http://gerrit.cloudera.org:8080/21340
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
(cherry picked from commit 29e418679380591ea7b8e4bdb797ccdbb0964a3d)


> Several tests timeout waiting for admission
> ---
>
> Key: IMPALA-13024
> URL: https://issues.apache.org/jira/browse/IMPALA-13024
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Csaba Ringhofer
>Assignee: Riza Suminto
>Priority: Critical
> Fix For: Impala 4.4.0
>
>
> A bunch of seemingly unrelated tests failed with the following message:
> Example: 
> query_test.test_spilling.TestSpillingDebugActionDimensions.test_spilling_aggs[protocol:
>  beeswax | exec_option: {'mt_dop': 1, 'debug_action': None, 
> 'default_spillable_buffer_size': '256k'} | table_format: parquet/none] 
> {code}
> ImpalaBeeswaxException: EQuery aborted:Admission for query exceeded 
> timeout 6ms in pool default-pool. Queued reason: Not enough admission 
> control slots available on host ... . Needed 1 slots but 18/16 are already in 
> use. Additional Details: Not Applicable
> {code}
> This happened in an ASAN build. Another test also failed which may be related 
> to the cause:
> custom_cluster.test_admission_controller.TestAdmissionController.test_queue_reasons_slots
>  
> {code}
> Timeout: query 'e1410add778cd7b0:c40812b9' did not reach one of the 
> expected states [4], last known state 5
> {code}
> test_queue_reasons_slots seems to be a known flaky test: IMPALA-10338



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13015) Dataload fails due to concurrency issue with test.jceks

2024-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843791#comment-17843791
 ] 

ASF subversion and git services commented on IMPALA-13015:
--

Commit 5045f19b5374678c10888376955f2ff5e360ae5b in impala's branch 
refs/heads/branch-4.4.0 from Abhishek Rawat
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=5045f19b5 ]

IMPALA-13015: Dataload fails due to concurrency issue with test.jceks

Moves the 'hadoop credential' command used for creating test.jceks to
testdata/bin/create-load-data.sh. Earlier it was in bin/load-data.py,
which is called in parallel, and was causing failures due to race
conditions.

Testing:
- Ran JniFrontendTest#testGetSecretFromKeyStore after data loading and
test ran clean.

Change-Id: I7fbeffc19f2b78c19fee9acf7f96466c8f4f9bcd
Reviewed-on: http://gerrit.cloudera.org:8080/21346
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
(cherry picked from commit f620e5d5c0bbdb0fd97bac31c7b7439cd13c6d08)


> Dataload fails due to concurrency issue with test.jceks
> ---
>
> Key: IMPALA-13015
> URL: https://issues.apache.org/jira/browse/IMPALA-13015
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 4.4.0
>Reporter: Joe McDonnell
>Assignee: Abhishek Rawat
>Priority: Major
>  Labels: flaky
>
> When doing dataload locally, it fails with this error:
> {noformat}
> Traceback (most recent call last):
>   File "/home/joemcdonnell/upstream/Impala/bin/load-data.py", line 523, in 
> 
>     if __name__ == "__main__": main()
>   File "/home/joemcdonnell/upstream/Impala/bin/load-data.py", line 322, in 
> main
>     os.remove(jceks_path)
> OSError: [Errno 2] No such file or directory: 
> '/home/joemcdonnell/upstream/Impala/testdata/jceks/test.jceks'
> Background task Loading functional-query data (pid 501094) failed.
> {noformat}
> testdata/bin/create-load-data.sh calls bin/load-data.py for functional, 
> TPC-H, and TPC-DS in parallel, so this logic has race conditions:
> {noformat}
>   jceks_path = TESTDATA_JCEKS_DIR + "/test.jceks"
>   if os.path.exists(jceks_path):
>     os.remove(jceks_path){noformat}
> I don't see a specific reason for this to be in bin/load-data.py. It should 
> be moved somewhere else that doesn't run in parallel. One possibility is to 
> add a step in testdata/bin/create-load-data.sh.
> This was introduced in 
> [https://github.com/apache/impala/commit/9837637d9342a49288a13a421d4e749818da1432]
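
For illustration, the failure above is a classic check-then-act race: the
os.path.exists() test and the os.remove() call are not atomic across the
parallel loaders. A minimal sketch (hypothetical; the actual fix instead moves
the command out of the parallel path) of a removal that tolerates concurrent
deleters:

{code:python}
import errno
import os

def remove_if_exists(path):
    # Attempt the removal directly and tolerate a concurrent deletion,
    # rather than racing on a separate os.path.exists() check.
    try:
        os.remove(path)
    except OSError as e:
        if e.errno != errno.ENOENT:  # ENOENT: another process removed it first
            raise

remove_if_exists("/tmp/test.jceks")  # hypothetical path
{code}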



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13035) Querying metadata tables from non-Iceberg tables throws IllegalArgumentException

2024-05-04 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843490#comment-17843490
 ] 

ASF subversion and git services commented on IMPALA-13035:
--

Commit f75745e9bbf289ffb9afe7baee882b5af4304d5b in impala's branch 
refs/heads/master from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=f75745e9b ]

IMPALA-13035: Querying metadata tables from non-Iceberg tables throws 
IllegalArgumentException

When attempting to query a metadata table of a non-Iceberg table the
analyzer throws 'IllegalArgumentException'.

The problem is that 'IcebergMetadataTable.isIcebergMetadataTable()'
doesn't actually check whether the given path belongs to a valid
metadata table; it only checks whether the path could syntactically
refer to one. This is because it is called in
'Path.getCandidateTables()', at which point analysis has not been done
yet.

However, 'IcebergMetadataTable.isIcebergMetadataTable()' is also called
in 'Analyzer.getTable()'. If 'isIcebergMetadataTable()' returns true,
'getTable()' tries to instantiate an 'IcebergMetadataTable' object with
the table ref of the base table. If that table is not an Iceberg table,
a precondition check fails.

This change renames 'isIcebergMetadataTable()' to
'canBeIcebergMetadataTable()' and adds a new 'isIcebergMetadataTable()'
function, which also takes an 'Analyzer' as a parameter. With the help
of the 'Analyzer' it is possible to determine whether the base table is
an Iceberg table. 'Analyzer.getTable()' then uses this new
'isIcebergMetadataTable()' function instead of
canBeIcebergMetadataTable().

The constructor of 'IcebergMetadataTable' is also modified to take an
'FeIcebergTable' as the parameter for the base table instead of a
general 'FeTable'.

Testing:
 - Added a test query in iceberg-metadata-tables.test.

Change-Id: Ia7c25ed85a8813011537c73f0aaf72db1501f9ef
Reviewed-on: http://gerrit.cloudera.org:8080/21361
Tested-by: Impala Public Jenkins 
Reviewed-by: Peter Rozsa 


> Querying metadata tables from non-Iceberg tables throws 
> IllegalArgumentException
> 
>
> Key: IMPALA-13035
> URL: https://issues.apache.org/jira/browse/IMPALA-13035
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 4.3.0
>Reporter: Peter Rozsa
>Assignee: Daniel Becker
>Priority: Minor
>  Labels: impala-iceberg
>
> If a query targets an Iceberg metadata table like default.xy.`files` and the 
> xy table is not an Iceberg table then the analyzer throws 
> IllegalArgumentException. 
> The main concern is that IcebergMetadataTable.java:isIcebergMetadataTable is 
> called before it's validated that the table is indeed an IcebergTable.
> Example: 
> {code:java}
> create table xy(a int);
> select * from default.xy.`files`;{code}
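
To make the two checks concrete, a hypothetical Python sketch (the real
frontend is Java, and the metadata-table names and accessors below are
stand-ins) of the split the fix describes: a purely syntactic test usable
before analysis, and an analyzer-aware test that also confirms the base table
is an Iceberg table:

{code:python}
METADATA_TABLE_NAMES = {"files", "snapshots", "history"}  # illustrative subset

def can_be_iceberg_metadata_table(path_parts):
    # Syntactic check only: db.tbl.`files` *could* name a metadata table.
    return len(path_parts) == 3 and path_parts[-1] in METADATA_TABLE_NAMES

def is_iceberg_metadata_table(path_parts, analyzer):
    # Semantic check: additionally require the base table to be Iceberg.
    if not can_be_iceberg_metadata_table(path_parts):
        return False
    base_table = analyzer.get_table(path_parts[:-1])  # hypothetical accessor
    return base_table is not None and base_table.is_iceberg
{code}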



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13031) Enhancing logging for spilling configuration with local buffer directory details

2024-05-02 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842850#comment-17842850
 ] 

ASF subversion and git services commented on IMPALA-13031:
--

Commit 5b70e48ebbaac234338f2077ce6505f35b2d37aa in impala's branch 
refs/heads/master from Yida Wu
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=5b70e48eb ]

IMPALA-13031: Enhancing logging for spilling configuration with local buffer 
directory details

The patch adds logging for the local buffer directory when using
remote scratch space. The printed log looks like
"Using local buffer directory for scratch space
/tmp/test/impala-scratch on disk 8 limit: 500.00 MB,
priority: 2147483647".

Tests:
Manually verified that the logging works as described.

Change-Id: I8fb357016d72a363ee5016f7881b0f6b0426aff5
Reviewed-on: http://gerrit.cloudera.org:8080/21350
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Enhancing logging for spilling configuration with local buffer directory 
> details
> 
>
> Key: IMPALA-13031
> URL: https://issues.apache.org/jira/browse/IMPALA-13031
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Reporter: Yida Wu
>Assignee: Yida Wu
>Priority: Major
>
> When setting up spilling to a remote location, Impala also needs to configure a 
> local buffer directory within the scratch directories. However, the current 
> logging does not include any information about the local buffer directory, 
> such as its capacity or other relevant details. It would be beneficial to log 
> this information for better tracking of potential issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12910) Run TPCH/TPCDS queries for external JDBC tables

2024-05-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842837#comment-17842837
 ] 

ASF subversion and git services commented on IMPALA-12910:
--

Commit 08f8a300250df7b4f9a517cdb6bab48c379b7e03 in impala's branch 
refs/heads/master from wzhou-code
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=08f8a3002 ]

IMPALA-12910: Support running TPCH/TPCDS queries for JDBC tables

This patch adds a script to create external JDBC tables for the TPCH
and TPCDS datasets, and adds unit tests that run TPCH and TPCDS queries
against external JDBC tables with Impala-Impala federation. Note that
JDBC tables are mapping tables; they don't take additional disk space.
It fixes a race condition in the caching of SQL DataSource objects by
using a new DataSourceObjectCache class, which checks the reference
count before closing a SQL DataSource.
Adds a new query option 'clean_dbcp_ds_cache' with a default value of
true. When it is set to false, a SQL DataSource object is not closed
when its reference count reaches 0; instead it is kept in the cache
until it has been idle for more than 5 minutes. The flag variable
'dbcp_data_source_idle_timeout_s' makes this duration configurable.
java.sql.Connection.close() sometimes fails to remove a closed
connection from the connection pool, which causes JDBC worker threads
to wait a long time for available connections from the pool. The
workaround is to call the BasicDataSource.invalidateConnection() API
to close a connection.
Two flag variables are added for the DBCP configuration properties
'maxTotal' and 'maxWaitMillis'. Note that the 'maxActive' and 'maxWait'
properties were renamed to 'maxTotal' and 'maxWaitMillis' respectively
in apache.commons.dbcp v2.
Fixes a bug in database type comparison: the type strings specified by
the user can be lower case or mixed case, but the code compared them
against upper-case strings.
Fixes an issue where the SQL DataSource object was not closed in
JdbcDataSource.open() and JdbcDataSource.getNext() when errors were
returned from DBCP APIs or JDBC drivers.

testdata/bin/create-tpc-jdbc-tables.py supports creating JDBC tables
for Impala-Impala, Postgres and MySQL.
The following sample commands create TPCDS JDBC tables for
Impala-Impala federation with a remote coordinator running at
10.19.10.86, and for a Postgres server running at 10.19.10.86:
  ${IMPALA_HOME}/testdata/bin/create-tpc-jdbc-tables.py \
--jdbc_db_name=tpcds_jdbc --workload=tpcds \
--database_type=IMPALA --database_host=10.19.10.86 --clean

  ${IMPALA_HOME}/testdata/bin/create-tpc-jdbc-tables.py \
--jdbc_db_name=tpcds_jdbc --workload=tpcds \
--database_type=POSTGRES --database_host=10.19.10.86 \
--database_name=tpcds --clean

TPCDS tests for JDBC tables run only for release/exhaustive builds.
TPCH tests for JDBC tables run for core and exhaustive builds, except
Dockerized builds.

Remaining Issues:
 - tpcds-decimal_v2-q80a failed with returned rows not matching expected
   results for some decimal values. This will be fixed in IMPALA-13018.

Testing:
 - Passed core tests.
 - Passed query_test/test_tpcds_queries.py in release/exhaustive build.
 - Manually verified that only one SQL DataSource object was created for
   test_tpcds_queries.py::TestTpcdsQueryForJdbcTables since query option
   'clean_dbcp_ds_cache' was set as false, and the SQL DataSource object
   was closed by cleanup thread.

Change-Id: I44e8c1bb020e90559c7f22483a7ab7a151b8f48a
Reviewed-on: http://gerrit.cloudera.org:8080/21304
Reviewed-by: Abhishek Rawat 
Tested-by: Impala Public Jenkins 
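
As an illustration of the caching scheme described above, a hypothetical
Python sketch (the real DataSourceObjectCache is Java; all names below are
stand-ins) of a reference-counted cache that closes an entry only at refcount
zero, or lazily once it has been idle past the timeout:

{code:python}
import threading
import time

class DataSourceCache(object):
    def __init__(self, idle_timeout_s=300):  # cf. dbcp_data_source_idle_timeout_s
        self._lock = threading.Lock()
        self._entries = {}  # key -> [datasource, refcount, last_release_time]
        self._idle_timeout_s = idle_timeout_s

    def acquire(self, key, factory):
        with self._lock:
            entry = self._entries.get(key)
            if entry is None:
                entry = [factory(), 0, time.time()]
                self._entries[key] = entry
            entry[1] += 1  # bump refcount while a query uses the DataSource
            return entry[0]

    def release(self, key, clean_immediately=True):
        with self._lock:
            entry = self._entries[key]
            entry[1] -= 1
            entry[2] = time.time()
            if clean_immediately and entry[1] == 0:
                del self._entries[key]
                entry[0].close()  # safe: nothing references it anymore

    def evict_idle(self):
        # Run periodically by a cleanup thread when clean_immediately=False.
        with self._lock:
            now = time.time()
            for key, entry in list(self._entries.items()):
                if entry[1] == 0 and now - entry[2] > self._idle_timeout_s:
                    del self._entries[key]
                    entry[0].close()
{code}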


> Run TPCH/TPCDS queries for external JDBC tables
> ---
>
> Key: IMPALA-12910
> URL: https://issues.apache.org/jira/browse/IMPALA-12910
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Perf Investigation
>Reporter: Wenzhe Zhou
>Assignee: Wenzhe Zhou
>Priority: Major
>
> Need performance data for queries on external JDBC tables to be documented in 
> the design doc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13018) Fix test_tpcds_queries.py/TestTpcdsQueryForJdbcTables.test_tpcdstpcds-decimal_v2-q80a failure

2024-05-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842838#comment-17842838
 ] 

ASF subversion and git services commented on IMPALA-13018:
--

Commit 08f8a300250df7b4f9a517cdb6bab48c379b7e03 in impala's branch 
refs/heads/master from wzhou-code
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=08f8a3002 ]

IMPALA-12910: Support running TPCH/TPCDS queries for JDBC tables

This patch adds a script to create external JDBC tables for the TPCH
and TPCDS datasets, and adds unit tests that run TPCH and TPCDS queries
against external JDBC tables with Impala-Impala federation. Note that
JDBC tables are mapping tables; they don't take additional disk space.
It fixes a race condition in the caching of SQL DataSource objects by
using a new DataSourceObjectCache class, which checks the reference
count before closing a SQL DataSource.
Adds a new query option 'clean_dbcp_ds_cache' with a default value of
true. When it is set to false, a SQL DataSource object is not closed
when its reference count reaches 0; instead it is kept in the cache
until it has been idle for more than 5 minutes. The flag variable
'dbcp_data_source_idle_timeout_s' makes this duration configurable.
java.sql.Connection.close() sometimes fails to remove a closed
connection from the connection pool, which causes JDBC worker threads
to wait a long time for available connections from the pool. The
workaround is to call the BasicDataSource.invalidateConnection() API
to close a connection.
Two flag variables are added for the DBCP configuration properties
'maxTotal' and 'maxWaitMillis'. Note that the 'maxActive' and 'maxWait'
properties were renamed to 'maxTotal' and 'maxWaitMillis' respectively
in apache.commons.dbcp v2.
Fixes a bug in database type comparison: the type strings specified by
the user can be lower case or mixed case, but the code compared them
against upper-case strings.
Fixes an issue where the SQL DataSource object was not closed in
JdbcDataSource.open() and JdbcDataSource.getNext() when errors were
returned from DBCP APIs or JDBC drivers.

testdata/bin/create-tpc-jdbc-tables.py supports creating JDBC tables
for Impala-Impala, Postgres and MySQL.
The following sample commands create TPCDS JDBC tables for
Impala-Impala federation with a remote coordinator running at
10.19.10.86, and for a Postgres server running at 10.19.10.86:
  ${IMPALA_HOME}/testdata/bin/create-tpc-jdbc-tables.py \
--jdbc_db_name=tpcds_jdbc --workload=tpcds \
--database_type=IMPALA --database_host=10.19.10.86 --clean

  ${IMPALA_HOME}/testdata/bin/create-tpc-jdbc-tables.py \
--jdbc_db_name=tpcds_jdbc --workload=tpcds \
--database_type=POSTGRES --database_host=10.19.10.86 \
--database_name=tpcds --clean

TPCDS tests for JDBC tables run only for release/exhaustive builds.
TPCH tests for JDBC tables run for core and exhaustive builds, except
Dockerized builds.

Remaining Issues:
 - tpcds-decimal_v2-q80a failed with returned rows not matching expected
   results for some decimal values. This will be fixed in IMPALA-13018.

Testing:
 - Passed core tests.
 - Passed query_test/test_tpcds_queries.py in release/exhaustive build.
 - Manually verified that only one SQL DataSource object was created for
   test_tpcds_queries.py::TestTpcdsQueryForJdbcTables since query option
   'clean_dbcp_ds_cache' was set as false, and the SQL DataSource object
   was closed by cleanup thread.

Change-Id: I44e8c1bb020e90559c7f22483a7ab7a151b8f48a
Reviewed-on: http://gerrit.cloudera.org:8080/21304
Reviewed-by: Abhishek Rawat 
Tested-by: Impala Public Jenkins 


> Fix 
> test_tpcds_queries.py/TestTpcdsQueryForJdbcTables.test_tpcdstpcds-decimal_v2-q80a
>  failure
> -
>
> Key: IMPALA-13018
> URL: https://issues.apache.org/jira/browse/IMPALA-13018
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Reporter: Wenzhe Zhou
>Assignee: Pranav Yogi Lodha
>Priority: Major
>
> The returned rows do not match the expected results for some decimal-type 
> columns.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13049) Add dependency management for the log4j2 version

2024-05-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842753#comment-17842753
 ] 

ASF subversion and git services commented on IMPALA-13049:
--

Commit d09c5024907aaf387aaa584dc86cb2b4d641a582 in impala's branch 
refs/heads/master from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=d09c50249 ]

IMPALA-13049: Add dependency management for log4j2 to use 2.18.0

Currently, there is no dependency management for the log4j2
version. Impala itself doesn't use log4j2. However, recently
we encountered a case where one dependency brought in
log4j-core 2.18.0 and another brought in log4j-api 2.17.1.
log4j-core 2.18.0 relies on the existence of the ServiceLoaderUtil
class from log4j-api 2.18.0. log4j-api 2.17.1 doesn't have this
class, which causes class not found exceptions.

This uses dependency management to set the log4j2 version to 2.18.0
for log4j-core and log4j-api to avoid any mismatch.

Testing:
 - Ran a local build and verified that both log4j-core and log4j-api
   are using 2.18.0.

Change-Id: Ib4f8485adadb90f66f354a5dedca29992c6d4e6f
Reviewed-on: http://gerrit.cloudera.org:8080/21379
Reviewed-by: Michael Smith 
Reviewed-by: Abhishek Rawat 
Tested-by: Impala Public Jenkins 


> Add dependency management for the log4j2 version
> 
>
> Key: IMPALA-13049
> URL: https://issues.apache.org/jira/browse/IMPALA-13049
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend, Infrastructure
>Affects Versions: Impala 4.4.0
>Reporter: Joe McDonnell
>Assignee: Joe McDonnell
>Priority: Critical
>
> In some internal builds, we see cases where one dependency brings in one 
> version of log4j2 and another brings in a different version on a different 
> artifact. In particular, we have seen cases where Hive brings in log4j-api 
> 2.17.1 while something else brings in log4j-core 2.18.0. This is a bad 
> combination, because log4j-core 2.18.0 relies on the ServiceLoaderUtil class 
> existing in log4j-api, but log4j-api 2.17.1 doesn't have it. This can result 
> in class not found exceptions.
> Impala itself uses reload4j rather than log4j2, so this is purely about 
> coordinating dependencies rather than Impala code.
> We should add dependency management for log4j-api and log4j-core. It makes 
> sense to standardize on 2.18.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13046) Rework Iceberg mixed format delete test for Hive optimization

2024-04-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842513#comment-17842513
 ] 

ASF subversion and git services commented on IMPALA-13046:
--

Commit 20f908b1ab3ce068b9ce14aa8f35a897f5cd0c88 in impala's branch 
refs/heads/master from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=20f908b1a ]

IMPALA-13046: Update Iceberg mixed format deletes test

Updates iceberg-mixed-format-position-deletes.test for HIVE-28069. Newer
versions of Hive will now remove a data file if a delete would negate
all rows in the data file to reduce the number of small files produced.
The test now ensures every data file it expects to produce will have a
row after the delete (or circumvents the merge logic by using different
formats).

Change-Id: I87c23cc541983223c6b766372f4e582c33ae6836
Reviewed-on: http://gerrit.cloudera.org:8080/21373
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Rework Iceberg mixed format delete test for Hive optimization
> -
>
> Key: IMPALA-13046
> URL: https://issues.apache.org/jira/browse/IMPALA-13046
> Project: IMPALA
>  Issue Type: Task
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Major
> Fix For: Impala 4.x
>
>
> A Hive improvement (HIVE-27731) to Iceberg support changed Hive's behavior 
> around handling deletes. It used to always add a delete file, but now if a 
> delete would negate all the contents of a data file it instead removes the 
> data file. This breaks iceberg-mixed-format-position-deletes.test
> {code}
> query_test/test_iceberg.py:1472: in test_read_mixed_format_position_deletes
> vector, unique_database)
> common/impala_test_suite.py:820: in run_test_case
> self.__verify_results_and_errors(vector, test_section, result, use_db)
> common/impala_test_suite.py:627: in __verify_results_and_errors
> replace_filenames_with_placeholder)
> common/test_result_verifier.py:520: in verify_raw_results
> VERIFIER_MAP[verifier](expected, actual)
> common/test_result_verifier.py:290: in verify_query_result_is_subset
> unicode(expected_row), unicode(actual_results))
> E   AssertionError: Could not find expected row 
> row_regex:'hdfs://localhost:20500/test-warehouse/test_read_mixed_format_position_deletes_6fb8ae98.db/ice_mixed_formats/data/.*-data-.*.parquet','.*B','','.*'
>  in actual rows:
> E   
> 'hdfs://localhost:20500/test-warehouse/test_read_mixed_format_position_deletes_6fb8ae98.db/ice_mixed_formats/data/0-0-data-jenkins_20240427231939_2457cdfb-2e04-471a-9661-4551626f60ee-job_17142740646310_0071-4-1.orc','306B','','NONE'
> {code}
> as the only resulting data file is a .orc file containing the only valid row.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12684) Use IMPALA_COMPRESSED_DEBUG_INFO=true by default

2024-04-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842167#comment-17842167
 ] 

ASF subversion and git services commented on IMPALA-12684:
--

Commit 56f35ad40a7ded4a700222eef8dc99cf7ea44625 in impala's branch 
refs/heads/master from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=56f35ad40 ]

IMPALA-12684: Enable IMPALA_COMPRESSED_DEBUG_INFO by default

IMPALA_COMPRESSED_DEBUG_INFO was introduced in IMPALA-11511
and reduces Impala binary sizes by >50%. Debug tools like
gdb and our minidump processing scripts handle compressed
debug information properly. There are slightly higher link
times and additional overhead when doing debugging.

Overall, the reduction in binary sizes seems worth it given
the modest overhead. Compressing the debug information also
avoids concerns that adding debug information to toolchain
components would increase binary sizes.

This changes the default for IMPALA_COMPRESSED_DEBUG_INFO to
true.

Testing:
 - Ran pstack on a Centos 7 machine running tests with
   IMPALA_COMPRESSED_DEBUG_INFO=true and verified that
   the symbols work properly
 - Forced the production of minidumps for a job using
   IMPALA_COMPRESSED_DEBUG_INFO=true and verified it is
   processed properly.
 - Used this locally for development for several months

Change-Id: I31640f1453d351b11644bb46af3d2158b22af5b3
Reviewed-on: http://gerrit.cloudera.org:8080/20871
Reviewed-by: Quanlong Huang 
Tested-by: Impala Public Jenkins 


> Use IMPALA_COMPRESSED_DEBUG_INFO=true by default
> 
>
> Key: IMPALA-12684
> URL: https://issues.apache.org/jira/browse/IMPALA-12684
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 4.4.0
>Reporter: Joe McDonnell
>Priority: Major
>
> A large part of the Impala binary's size is debug information. The 
> IMPALA_COMPRESSED_DEBUG_INFO environment variable was introduced in 
> IMPALA-11511 and reduces the binary size by >50%. For developer environments, 
> this can dramatically reduce disk space usage, especially when building 
> the backend tests. The downside is slower linking and slower processing of 
> the debuginfo (attaching with GDB, dumping symbols for minidumps).
> The disk space difference is very large, and this can help enable us to 
> expand debug information for the toolchain. We should switch the default 
> value to true.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-11511) Provide an option to build with compressed debug info

2024-04-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-11511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842168#comment-17842168
 ] 

ASF subversion and git services commented on IMPALA-11511:
--

Commit 56f35ad40a7ded4a700222eef8dc99cf7ea44625 in impala's branch 
refs/heads/master from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=56f35ad40 ]

IMPALA-12684: Enable IMPALA_COMPRESSED_DEBUG_INFO by default

IMPALA_COMPRESSED_DEBUG_INFO was introduced in IMPALA-11511
and reduces Impala binary sizes by >50%. Debug tools like
gdb and our minidump processing scripts handle compressed
debug information properly. There are slightly higher link
times and additional overhead when doing debugging.

Overall, the reduction in binary sizes seems worth it given
the modest overhead. Compressing the debug information also
avoids concerns that adding debug information to toolchain
components would increase binary sizes.

This changes the default for IMPALA_COMPRESSED_DEBUG_INFO to
true.

Testing:
 - Ran pstack on a Centos 7 machine running tests with
   IMPALA_COMPRESSED_DEBUG_INFO=true and verified that
   the symbols work properly
 - Forced the production of minidumps for a job using
   IMPALA_COMPRESSED_DEBUG_INFO=true and verified it is
   processed properly.
 - Used this locally for development for several months

Change-Id: I31640f1453d351b11644bb46af3d2158b22af5b3
Reviewed-on: http://gerrit.cloudera.org:8080/20871
Reviewed-by: Quanlong Huang 
Tested-by: Impala Public Jenkins 


> Provide an option to build with compressed debug info
> -
>
> Key: IMPALA-11511
> URL: https://issues.apache.org/jira/browse/IMPALA-11511
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 4.2.0
>Reporter: Joe McDonnell
>Assignee: Joe McDonnell
>Priority: Major
> Fix For: Impala 4.2.0
>
>
> For builds with debug information, the debug information is often a large 
> portion of the binary size. There is a feature that compresses the debug info 
> using ZLIB via the "-gz" compilation flag. It makes a very large difference 
> to the size of our binaries:
> {noformat}
> GCC 7.5:
> debug: 726767520
> debug with -gz: 325970776
> release: 707911496
> release with -gz: 301671026
> GCC 10.4:
> debug: 870378832
> debug with -gz: 351442253
> release: 974600536
> release with -gz: 367938487{noformat}
> The size reduction would be useful for developers, but support in other tools 
> is mixed. gdb has support and seems to work fine. breakpad does not have 
> support. However, it is easy to convert a binary with compressed debug 
> symbols to one with normal debug symbols using objcopy:
> {noformat}
> objcopy --decompress-debug-sections [in_binary] [out_binary]{noformat}
> Given a minidump, it is possible to run objcopy to decompress the debug 
> symbols for the original binary, dump the breakpad symbols, and then process 
> the minidump successfully. So, it should be possible to modify 
> bin/dump_breakpad_symbols.py to do this automatically.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12997) test_query_log tests get stuck trying to write to the log

2024-04-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841396#comment-17841396
 ] 

ASF subversion and git services commented on IMPALA-12997:
--

Commit 712a37bce461c233bc9e648550e82696bef0f224 in impala's branch 
refs/heads/master from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=712a37bce ]

IMPALA-12997: Use graceful shutdown for query log tests

Uses graceful shutdown for all tests that might insert into
'sys.impala_query_log' to avoid leaving the table locked in HMS by a
SIGTERM. That's primarily any test that lowers
'query_log_write_interval_s' or 'query_log_max_queued'.

Lowers grace period on test_query_log_table_flush_on_shutdown because
ShutdownWorkloadManagement() is not started until the grace period ends.

Updates "Adding/Removing local backend" to only apply to executors. It
was only added for executors, but would be removed on dedicated
coordinators as well (resulting in a DFATAL message during graceful
shutdown).

Change-Id: Ia123c53a952a77ff4a9c02736b5717ccaa3566dc
Reviewed-on: http://gerrit.cloudera.org:8080/21345
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> test_query_log tests get stuck trying to write to the log
> -
>
> Key: IMPALA-12997
> URL: https://issues.apache.org/jira/browse/IMPALA-12997
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.4.0
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Major
>
> In some test runs, most tests under test_query_log will start to fail on 
> various conditions like
> {code}
> custom_cluster/test_query_log.py:452: in 
> test_query_log_table_query_select_mt_dop
> "impala-server.completed-queries.written", 1, 60)
> common/impala_service.py:144: in wait_for_metric_value
> self.__metric_timeout_assert(metric_name, expected_value, timeout)
> common/impala_service.py:213: in __metric_timeout_assert
> assert 0, assert_string
> E   AssertionError: Metric impala-server.completed-queries.written did not 
> reach value 1 in 60s.
> E   Dumping debug webpages in JSON format...
> E   Dumped memz JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/memz.json
> E   Dumped metrics JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/metrics.json
> E   Dumped queries JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/queries.json
> E   Dumped sessions JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/sessions.json
> E   Dumped threadz JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/threadz.json
> E   Dumped rpcz JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/rpcz.json
> E   Dumping minidumps for impalads/catalogds...
> E   Dumped minidump for Impalad PID 3680802
> E   Dumped minidump for Impalad PID 3680805
> E   Dumped minidump for Impalad PID 3680809
> E   Dumped minidump for Catalogd PID 3680732
> {code}
> or
> {code}
> custom_cluster/test_query_log.py:921: in test_query_log_ignored_sqls
> assert len(sql_results.data) == 1, "query not found in completed queries 
> table"
> E   AssertionError: query not found in completed queries table
> E   assert 0 == 1
> E+  where 0 = len([])
> E+where [] =  object at 0xa00cc350>.data
> {code}
> One symptom that seems related to this is INSERT operations into 
> sys.impala_query_log that start "UnregisterQuery()" but never finish (with 
> "Query successfully unregistered").
> We can identify cases like that with
> {code}
> for log in $(ag -l 'INSERT INTO sys.impala_query_log' impalad.*); do echo 
> $log; for qid in $(ag -o '[0-9a-f]*:[0-9a-f]*\] Analyzing query: INSERT INTO 
> sys.impala_query_log' $log | cut -d']' -f1); do if ! ag "Query successfully 
> unregistered: query_id=$qid" $log; then echo "$qid not unregistered"; fi; 
> done; done
> {code}
> A similar case may occur with creating the table too
> {code}
> for log in $(ag -l 'CREATE TABLE IF NOT EXISTS sys.impala_query_log' 
> impalad.impala-ec2-rhel88-m7g-4xlarge-ondemand-0a5a.vpc.cloudera.com.jenkins.log.INFO.20240410-*);
>  do QID=$(ag -o '[0-9a-f]*:[0-9a-f]*\] Analyzing query: INSERT INTO 
> sys.impala_query_log' $log | cut -d']' -f1); echo $log; ag "Query 
> successfully unregistered: query_id=$QID" $log; done
> {code}
> although these frequently fail because the test completes and shuts down 
> Impala before the CREATE TABLE query completes.
> Tracking one of those cases led to catalogd errors that repeated for 1m27s 
> before the test suite restarted catalogd:
> {code}
> W0410 12:48:05.051760 3681790 Tasks.java:456] 
> 6647229faf7637d5:3ec7565b] Retrying task after failure: Waiting for 
> lock on table 

[jira] [Commented] (IMPALA-13012) Completed queries write fails regularly under heavy load

2024-04-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841385#comment-17841385
 ] 

ASF subversion and git services commented on IMPALA-13012:
--

Commit ba32d70891fd68c5c1234ed543b74c51661bf272 in impala's branch 
refs/heads/master from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=ba32d7089 ]

IMPALA-13012: Lower default query_log_max_queued

Sets the query_log_max_queued default such that

  query_log_max_queued * num_columns(49) < statement_expression_limit

to avoid triggering e.g.

  AnalysisException: Exceeded the statement expression limit (25)
  Statement has 370039 expressions.

Also increases statement_expression_limit for insertion to avoid an
error if query_log_max_queued is changed.

Logs time taken to write to the queries table for help with debugging
and adds histogram "impala-server.completed-queries.write-durations".

Fixes InternalServer so it uses 'default_query_options'.

Change-Id: I6535675307d88cb65ba7d908f3c692e0cf3259d7
Reviewed-on: http://gerrit.cloudera.org:8080/21351
Reviewed-by: Michael Smith 
Tested-by: Michael Smith 
Reviewed-by: Riza Suminto 


> Completed queries write fails regularly under heavy load
> 
>
> Key: IMPALA-13012
> URL: https://issues.apache.org/jira/browse/IMPALA-13012
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.4.0
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Critical
>
> Under heavy test load (running EE tests), Impala regularly fails to write 
> completed queries with errors like
> {code}
> W0411 19:11:07.764967 32713 workload-management.cc:435] failed to write 
> completed queries table="sys.impala_query_log" record_count="10001"
> W0411 19:11:07.764983 32713 workload-management.cc:437] AnalysisException: 
> Exceeded the statement expression limit (25)
> Statement has 370039 expressions.
> {code}
> After a few attempts, it floods logs with an error for each query that could 
> not be written
> {code}
> E0411 19:11:24.646953 32713 workload-management.cc:376] could not write 
> completed query table="sys.impala_query_log" 
> query_id="3142ceb1380b58e6:715b83d9"
> {code}
> This seems like poor default behavior. Options for addressing it:
> # Decrease the default for {{query_log_max_queued}}. Inserts are pretty 
> constant at 37 expressions per entry. I'm not sure why that isn't 49, since 
> that's the number of columns we have; maybe some fields are frequently 
> omitted. I would cap {{query_log_max_queued}} to {{statement_expression_limit 
> / number_of_columns ~ 5100}} (see the arithmetic sketch below).
> # Allow workload management to {{set statement_expression_limit}} higher 
> using a similar formula. This may be relatively safe as the expressions are 
> simple.
> # Ideally we would skip expression parsing and construct TExecRequest 
> directly, but that's a much larger effort.
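
A quick back-of-the-envelope for option 1 (the 250,000 limit is an assumption,
chosen to be consistent with the ~5100 cap cited above):

{code:python}
# Assumed values: 250000 for statement_expression_limit, 49 columns in
# sys.impala_query_log, both as discussed in this report.
statement_expression_limit = 250000
num_columns = 49
max_queued_cap = statement_expression_limit // num_columns
print(max_queued_cap)  # 5102 -> the ~5100 cap suggested above
{code}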



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13005) Query Live table not visible via metastore

2024-04-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841381#comment-17841381
 ] 

ASF subversion and git services commented on IMPALA-13005:
--

Commit 73a9ef9c4c00419b5082f355d96961c5971634d6 in impala's branch 
refs/heads/master from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=73a9ef9c4 ]

IMPALA-13005: Create Query Live table in HMS

Creates the 'sys.impala_query_live' table in HMS using a similar 'CREATE
TABLE' command to 'sys.impala_query_log'. Updates frontend to identify a
System Table based on the '__IMPALA_SYSTEM_TABLE' property. Tables
improperly marked with '__IMPALA_SYSTEM_TABLE' will error when
attempting to scan them because no relevant scanner will be available.

Creating the table in HMS simplifies supporting 'SHOW CREATE TABLE' and
'DESCRIBE EXTENDED', so allows them for parity with Query Log.
Explicitly disables 'COMPUTE STATS' on system tables as it doesn't work
correctly.

Makes System Tables work with local catalog mode, fixing

  LocalCatalogException: Unknown table type for table sys.impala_query_live

Updates workload management implementation to rely more on
SystemTables.thrift definition, and adds DCHECKs to verify completeness
and ordering.

Testing:
- adds additional test cases for changes to introspection commands
- passes existing test_query_live and test_query_log suites

Change-Id: Idf302ee54a819fdee2db0ae582a5eeddffe4a5b4
Reviewed-on: http://gerrit.cloudera.org:8080/21302
Reviewed-by: Riza Suminto 
Tested-by: Impala Public Jenkins 
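
A hypothetical sketch of the property-based detection described above (the
real frontend is Java; treating the property's value this way is an
assumption for illustration):

{code:python}
SYSTEM_TABLE_PROPERTY = "__IMPALA_SYSTEM_TABLE"

def is_system_table(tbl_properties):
    # A table is treated as a system table iff it carries the marker property.
    # A table improperly marked this way errors at scan time, since no
    # scanner matches it.
    return tbl_properties.get(SYSTEM_TABLE_PROPERTY, "false").lower() == "true"

print(is_system_table({"__IMPALA_SYSTEM_TABLE": "true"}))  # True
print(is_system_table({}))                                 # False
{code}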


> Query Live table not visible via metastore
> --
>
> Key: IMPALA-13005
> URL: https://issues.apache.org/jira/browse/IMPALA-13005
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 4.4.0
>Reporter: Michael Smith
>Assignee: Michael Smith
>Priority: Major
>
> Tools that inspect Hive Metastore to view information about tables are unable 
> to see {{sys.impala_query_live}} when {{enable_workload_mgmt}} is enabled. 
> This presents an inconsistent view depending on whether inspecting tables via 
> Impala or Hive MetaStore.
> Register {{sys.impala_query_live}} with Hive MetaStore, similar to what 
> {{sys.impala_query_log}} does, to make table listing consistent.
> It also doesn't work with local catalog mode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13024) Several tests timeout waiting for admission

2024-04-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841380#comment-17841380
 ] 

ASF subversion and git services commented on IMPALA-13024:
--

Commit 29e418679380591ea7b8e4bdb797ccdbb0964a3d in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=29e418679 ]

IMPALA-13024: Ignore slots if using default pool and empty group

Slot-based admission should not be enabled when using the default pool.
There is a bug where a coordinator-only query still does slot-based
admission because the executor group name is set to
ClusterMembershipMgr::EMPTY_GROUP_NAME ("empty group (using coordinator
only)"). This patch adds a check to recognize coordinator-only queries
in the default pool and skips slot-based admission for them.

Testing:
- Add BE test AdmissionControllerTest.CanAdmitRequestSlotsDefault.
- In test_executor_groups.py, split test_coordinator_concurrency to
  test_coordinator_concurrency_default and
  test_coordinator_concurrency_two_exec_group_cluster to show the
  behavior change.
- Pass core tests in ASAN build.

Change-Id: I0b08dea7ba0c78ac6b98c7a0b148df8fb036c4d0
Reviewed-on: http://gerrit.cloudera.org:8080/21340
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
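
A hypothetical Python sketch of the new check (the real logic is in the C++
admission controller; names and signature are stand-ins):

{code:python}
EMPTY_GROUP_NAME = "empty group (using coordinator only)"

def use_slot_based_admission(pool_is_default, executor_group_name):
    # Coordinator-only queries in the default pool bypass slot-based
    # admission; all other cases keep the existing behavior.
    coordinator_only = (executor_group_name == EMPTY_GROUP_NAME)
    return not (pool_is_default and coordinator_only)

print(use_slot_based_admission(True, EMPTY_GROUP_NAME))  # False: skip slot checks
print(use_slot_based_admission(True, "group-small"))     # True
{code}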


> Several tests timeout waiting for admission
> ---
>
> Key: IMPALA-13024
> URL: https://issues.apache.org/jira/browse/IMPALA-13024
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Csaba Ringhofer
>Assignee: Riza Suminto
>Priority: Critical
>
> A bunch of seemingly unrelated tests failed with the following message:
> Example: 
> query_test.test_spilling.TestSpillingDebugActionDimensions.test_spilling_aggs[protocol:
>  beeswax | exec_option: {'mt_dop': 1, 'debug_action': None, 
> 'default_spillable_buffer_size': '256k'} | table_format: parquet/none] 
> {code}
> ImpalaBeeswaxException: EQuery aborted:Admission for query exceeded 
> timeout 6ms in pool default-pool. Queued reason: Not enough admission 
> control slots available on host ... . Needed 1 slots but 18/16 are already in 
> use. Additional Details: Not Applicable
> {code}
> This happened in an ASAN build. Another test also failed which may be related 
> to the cause:
> custom_cluster.test_admission_controller.TestAdmissionController.test_queue_reasons_slots
>  
> {code}
> Timeout: query 'e1410add778cd7b0:c40812b9' did not reach one of the 
> expected states [4], last known state 5
> {code}
> test_queue_reasons_slots seems to be a known flaky test: IMPALA-10338



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12973) Add support for BINARY in complex types in Iceberg metadata tables

2024-04-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841228#comment-17841228
 ] 

ASF subversion and git services commented on IMPALA-12973:
--

Commit 457ab9831afc95b14ccf2a1a9397a923e3b16f8d in impala's branch 
refs/heads/master from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=457ab9831 ]

IMPALA-12973,IMPALA-11491,IMPALA-12651: Support BINARY nested in complex types 
in select list

Binary fields in complex types are currently not supported at all for
regular tables (an error is returned). For Iceberg metadata tables,
IMPALA-12899 added a temporary workaround to allow queries that contain
these fields to succeed by NULLing them out. This change adds support
for displaying them with base64 encoding for both regular and Iceberg
metadata tables.

Complex types are displayed in JSON format, so simply inserting the
bytes of the binary fields is not acceptable as it would produce invalid
JSON. Base64 is a widely used encoding that allows representing
arbitrary binary information using only a limited set of ASCII
characters.

This change also adds support for top level binary columns in Iceberg
metadata tables. However, these are not base64 encoded but are returned
in raw byte format - this is consistent with how top level binary
columns from regular (non-metadata) tables are handled.

Testing:
 - added test queries in iceberg-metadata-tables.test referencing both
   nested and top level binary fields; also updated existing queries
 - moved relevant tests (queries extracting binary fields from within
   complex types) from nested-types-scanner-basic.test to a new
   binary-in-complex-type.test file and also added a query that selects
   the containing complex types; this new test file is run from
   test_scanners.py::TestBinaryInComplexType::\
 test_binary_in_complex_type
 - moved negative tests in AnalyzerTest.TestUnsupportedTypes() to
   AnalyzeStmtsTest.TestComplexTypesInSelectList() and converted them to
   positive tests (expecting success); a negative test already in
   AnalyzeStmtsTest.TestComplexTypesInSelectList() was also converted

Change-Id: I7b1d7fa332a901f05a46e0199e13fb841d2687c2
Reviewed-on: http://gerrit.cloudera.org:8080/21269
Tested-by: Impala Public Jenkins 
Reviewed-by: Csaba Ringhofer 
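
To illustrate why base64 is used for binary fields nested in complex types, a
self-contained example (not Impala code):

{code:python}
import base64
import json

raw = b"\x00\xff\xfebinary\x01"  # arbitrary bytes; cannot be embedded in JSON as-is
encoded = base64.b64encode(raw).decode("ascii")

# The complex value can now be rendered as valid JSON...
print(json.dumps({"binary_field": encoded}))

# ...and the original bytes are recoverable without loss.
assert base64.b64decode(encoded) == raw
{code}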


> Add support for BINARY in complex types in Iceberg metadata tables
> --
>
> Key: IMPALA-12973
> URL: https://issues.apache.org/jira/browse/IMPALA-12973
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Reporter: Daniel Becker
>Assignee: Daniel Becker
>Priority: Major
>  Labels: impala-iceberg
>
> IMPALA-12899 added a temporary workaround to allow queries that contain 
> binary fields in complex types to succeed by NULLing out these binary fields. 
> We should now implement displaying them with base64 encoding.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12899) Temporary workaround for BINARY in complex types

2024-04-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841231#comment-17841231
 ] 

ASF subversion and git services commented on IMPALA-12899:
--

Commit 457ab9831afc95b14ccf2a1a9397a923e3b16f8d in impala's branch 
refs/heads/master from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=457ab9831 ]

IMPALA-12973,IMPALA-11491,IMPALA-12651: Support BINARY nested in complex types 
in select list

Binary fields in complex types are currently not supported at all for
regular tables (an error is returned). For Iceberg metadata tables,
IMPALA-12899 added a temporary workaround to allow queries that contain
these fields to succeed by NULLing them out. This change adds support
for displaying them with base64 encoding for both regular and Iceberg
metadata tables.

Complex types are displayed in JSON format, so simply inserting the
bytes of the binary fields is not acceptable as it would produce invalid
JSON. Base64 is a widely used encoding that allows representing
arbitrary binary information using only a limited set of ASCII
characters.

This change also adds support for top level binary columns in Iceberg
metadata tables. However, these are not base64 encoded but are returned
in raw byte format - this is consistent with how top level binary
columns from regular (non-metadata) tables are handled.

Testing:
 - added test queries in iceberg-metadata-tables.test referencing both
   nested and top level binary fields; also updated existing queries
 - moved relevant tests (queries extracting binary fields from within
   complex types) from nested-types-scanner-basic.test to a new
   binary-in-complex-type.test file and also added a query that selects
   the containing complex types; this new test file is run from
   test_scanners.py::TestBinaryInComplexType::\
 test_binary_in_complex_type
 - moved negative tests in AnalyzerTest.TestUnsupportedTypes() to
   AnalyzeStmtsTest.TestComplexTypesInSelectList() and converted them to
   positive tests (expecting success); a negative test already in
   AnalyzeStmtsTest.TestComplexTypesInSelectList() was also converted

Change-Id: I7b1d7fa332a901f05a46e0199e13fb841d2687c2
Reviewed-on: http://gerrit.cloudera.org:8080/21269
Tested-by: Impala Public Jenkins 
Reviewed-by: Csaba Ringhofer 


> Temporary workaround for BINARY in complex types
> 
>
> Key: IMPALA-12899
> URL: https://issues.apache.org/jira/browse/IMPALA-12899
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Daniel Becker
>Assignee: Daniel Becker
>Priority: Major
>  Labels: impala-iceberg
>
> The BINARY type is currently not supported inside complex types and a 
> cross-component decision is probably needed to support it (see IMPALA-11491). 
> We would like to enable EXPAND_COMPLEX_TYPES for Iceberg metadata tables 
> (IMPALA-12612), which requires that queries with BINARY inside complex types 
> don't fail. Enabling EXPAND_COMPLEX_TYPES is a more prioritised issue than 
> IMPALA-11491, so we should come up with a temporary solution, e.g. NULLing 
> BINARY values in complex types and logging a warning, or setting these BINARY 
> values to a warning string.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-11491) Support BINARY nested in complex types in select list

2024-04-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-11491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841229#comment-17841229
 ] 

ASF subversion and git services commented on IMPALA-11491:
--

Commit 457ab9831afc95b14ccf2a1a9397a923e3b16f8d in impala's branch 
refs/heads/master from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=457ab9831 ]

IMPALA-12973,IMPALA-11491,IMPALA-12651: Support BINARY nested in complex types 
in select list

Binary fields in complex types are currently not supported at all for
regular tables (an error is returned). For Iceberg metadata tables,
IMPALA-12899 added a temporary workaround to allow queries that contain
these fields to succeed by NULLing them out. This change adds support
for displaying them with base64 encoding for both regular and Iceberg
metadata tables.

Complex types are displayed in JSON format, so simply inserting the
bytes of the binary fields is not acceptable as it would produce invalid
JSON. Base64 is a widely used encoding that allows representing
arbitrary binary information using only a limited set of ASCII
characters.

This change also adds support for top level binary columns in Iceberg
metadata tables. However, these are not base64 encoded but are returned
in raw byte format - this is consistent with how top level binary
columns from regular (non-metadata) tables are handled.

Testing:
 - added test queries in iceberg-metadata-tables.test referencing both
   nested and top level binary fields; also updated existing queries
 - moved relevant tests (queries extracting binary fields from within
   complex types) from nested-types-scanner-basic.test to a new
   binary-in-complex-type.test file and also added a query that selects
   the containing complex types; this new test file is run from
   test_scanners.py::TestBinaryInComplexType::\
 test_binary_in_complex_type
 - moved negative tests in AnalyzerTest.TestUnsupportedTypes() to
   AnalyzeStmtsTest.TestComplexTypesInSelectList() and converted them to
   positive tests (expecting success); a negative test already in
   AnalyzeStmtsTest.TestComplexTypesInSelectList() was also converted

Change-Id: I7b1d7fa332a901f05a46e0199e13fb841d2687c2
Reviewed-on: http://gerrit.cloudera.org:8080/21269
Tested-by: Impala Public Jenkins 
Reviewed-by: Csaba Ringhofer 


> Support BINARY nested in complex types in select list
> -
>
> Key: IMPALA-11491
> URL: https://issues.apache.org/jira/browse/IMPALA-11491
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Reporter: Csaba Ringhofer
>Assignee: Daniel Becker
>Priority: Major
>
> The question is how to print BINARY types when the parent complex type is 
> printed to JSON. We can't rely on the existing Hive behavior, as it turned 
> out to be buggy:
> HIVE-26454



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12651) Add support to BINARY type Iceberg Metadata table columns

2024-04-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841230#comment-17841230
 ] 

ASF subversion and git services commented on IMPALA-12651:
--

Commit 457ab9831afc95b14ccf2a1a9397a923e3b16f8d in impala's branch 
refs/heads/master from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=457ab9831 ]

IMPALA-12973,IMPALA-11491,IMPALA-12651: Support BINARY nested in complex types 
in select list

Binary fields in complex types are currently not supported at all for
regular tables (an error is returned). For Iceberg metadata tables,
IMPALA-12899 added a temporary workaround to allow queries that contain
these fields to succeed by NULLing them out. This change adds support
for displaying them with base64 encoding for both regular and Iceberg
metadata tables.

Complex types are displayed in JSON format, so simply inserting the
bytes of the binary fields is not acceptable as it would produce invalid
JSON. Base64 is a widely used encoding that allows representing
arbitrary binary information using only a limited set of ASCII
characters.

This change also adds support for top level binary columns in Iceberg
metadata tables. However, these are not base64 encoded but are returned
in raw byte format - this is consistent with how top level binary
columns from regular (non-metadata) tables are handled.

Testing:
 - added test queries in iceberg-metadata-tables.test referencing both
   nested and top level binary fields; also updated existing queries
 - moved relevant tests (queries extracting binary fields from within
   complex types) from nested-types-scanner-basic.test to a new
   binary-in-complex-type.test file and also added a query that selects
   the containing complex types; this new test file is run from
   test_scanners.py::TestBinaryInComplexType::\
 test_binary_in_complex_type
 - moved negative tests in AnalyzerTest.TestUnsupportedTypes() to
   AnalyzeStmtsTest.TestComplexTypesInSelectList() and converted them to
   positive tests (expecting success); a negative test already in
   AnalyzeStmtsTest.TestComplexTypesInSelectList() was also converted

Change-Id: I7b1d7fa332a901f05a46e0199e13fb841d2687c2
Reviewed-on: http://gerrit.cloudera.org:8080/21269
Tested-by: Impala Public Jenkins 
Reviewed-by: Csaba Ringhofer 


> Add support to BINARY type Iceberg Metadata table columns
> -
>
> Key: IMPALA-12651
> URL: https://issues.apache.org/jira/browse/IMPALA-12651
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Daniel Becker
>Priority: Major
>  Labels: impala-iceberg
>
> Impala should be able to read BINARY type columns from Iceberg metadata 
> tables as strings; additionally, this should be allowed when reading these 
> types from complex types.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-13011) Need to remove awkward Authorization instantiation.

2024-04-25 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840958#comment-17840958
 ] 

ASF subversion and git services commented on IMPALA-13011:
--

Commit b39cd79ae84c415e0aebec2c2b4d7690d2a0cc7a in impala's branch 
refs/heads/master from Steve Carlin
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=b39cd79ae ]

IMPALA-12872: Use Calcite for optimization - part 1: simple queries

This is the first commit to use the Calcite library to parse,
analyze, and optimize queries.

The hook for the planner is through an override of the JniFrontend. The
CalciteJniFrontend class is the driver that walks through each of the
Calcite steps which are as follows:

CalciteQueryParser: Takes the string query and outputs an AST in the
form of Calcite's SqlNode object.

CalciteMetadataHandler: Iterates through the SqlNode from the previous step
and makes sure all essential table metadata is retrieved from catalogd.

CalciteValidator: Validates the SqlNode tree, akin to the Impala Analyzer.

CalciteRelNodeConverter: Changes the AST into a logical plan. In this first
commit, the only logical nodes used are LogicalTableScan and LogicalProject.
The LogicalTableScan will serve as the node that reads from an Hdfs Table and
the LogicalProject will only project out the used columns in the query. In
later versions, the LogicalProject will also handle function changes.

CalciteOptimizer: Optimizes the query. In this first cut it is a no-op,
but in later versions it will perform logical optimizations via
Calcite's rule mechanism.

CalcitePhysPlanCreator: Converts the Calcite RelNode logical tree into
Impala's PlanNode physical tree.

ExecRequestCreator: Implements the existing Impala steps that turn a Single
Node Plan into a Distributed Plan. It also creates the TExecRequest object
needed by the runtime server.

Only some very basic queries will work with this commit. These include:
select * from tbl <-- only needs the LogicalTableScan
select c1 from tbl <-- Also uses the LogicalProject

In the CalciteJniFrontend, there are some basic checks to make sure only
select statements get processed. Any non-query statement reverts
to the current Impala planner.

In this iteration, any query besides the minimal ones listed above
results in a caught exception, after which the query is run through the
current Impala planner. The tests that do work can be found in
calcite.test and run through the custom cluster test
test_experimental_planner.py.

This iteration should support all types with the exception of complex
types. Calcite does not have a STRING type, so the string type is
represented as VARCHAR(MAXINT), similar to how Hive represents its
STRING type.

The ImpalaTypeConverter file is used to convert the Impala Type object
to corresponding Calcite objects.

Authorization is not yet working with this current commit. A Jira has been
filed (IMPALA-13011) to deal with this.

Change-Id: I453fd75b7b705f4d7de1ed73c3e24cafad0b8c98
Reviewed-on: http://gerrit.cloudera.org:8080/21109
Tested-by: Impala Public Jenkins 
Reviewed-by: Joe McDonnell 
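
A hypothetical sketch of the step chain described above (the real driver,
CalciteJniFrontend, is Java; the callables here are stand-ins to show the
data flow only):

{code:python}
def plan_with_calcite(sql, parse, load_metadata, validate,
                      to_rel_node, optimize, to_phys_plan,
                      to_exec_request):
    ast = parse(sql)                    # string -> SqlNode AST
    load_metadata(ast)                  # fetch table metadata from catalogd
    validated = validate(ast)           # analyze, like Impala's Analyzer
    logical = to_rel_node(validated)    # AST -> logical RelNode plan
    optimized = optimize(logical)       # a no-op in this first cut
    physical = to_phys_plan(optimized)  # RelNode -> Impala PlanNode tree
    return to_exec_request(physical)    # distributed plan + TExecRequest
{code}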


> Need to remove awkward Authorization instantiation.
> ---
>
> Key: IMPALA-13011
> URL: https://issues.apache.org/jira/browse/IMPALA-13011
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Steve Carlin
>Priority: Major
>
> There is a reference to the Authorization instance in CalcitePhysPlanCreator 
> in order to instantiate the Analyzer object.
> Authorization needs to happen earlier.  This should be refactored so it is 
> not referenced in this part of the code.






[jira] [Commented] (IMPALA-12872) Support basic queries using Calcite on the frontend

2024-04-25 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840957#comment-17840957
 ] 

ASF subversion and git services commented on IMPALA-12872:
--

Commit b39cd79ae84c415e0aebec2c2b4d7690d2a0cc7a in impala's branch 
refs/heads/master from Steve Carlin
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=b39cd79ae ]

IMPALA-12872: Use Calcite for optimization - part 1: simple queries

This is the first commit to use the Calcite library to parse,
analyze, and optimize queries.

The hook for the planner is an override of the JniFrontend. The
CalciteJniFrontend class is the driver that walks through each of the
Calcite steps, which are as follows:

CalciteQueryParser: Takes the string query and outputs an AST in the
form of Calcite's SqlNode object.

CalciteMetadataHandler: Iterates through the SqlNode from the previous step
and makes sure all essential table metadata is retrieved from catalogd.

CalciteValidator: Validates the SqlNode tree, akin to the Impala Analyzer.

CalciteRelNodeConverter: Converts the AST into a logical plan. In this first
commit, the only logical nodes used are LogicalTableScan and LogicalProject.
The LogicalTableScan serves as the node that reads from an HDFS table, and
the LogicalProject only projects out the columns used in the query. In
later versions, the LogicalProject will also handle function changes.

CalciteOptimizer: Optimizes the query. In this cut it is a no-op, but in
later versions it will perform logical optimizations via Calcite's rule
mechanism.

CalcitePhysPlanCreator: Converts the Calcite RelNode logical tree into
Impala's PlanNode physical tree.

ExecRequestCreator: Implements the existing Impala steps that turn a Single
Node Plan into a Distributed Plan. It also creates the TExecRequest object
needed by the runtime server.

Only some very basic queries will work with this commit. These include:
select * from tbl <-- only needs the LogicalTableScan
select c1 from tbl <-- Also uses the LogicalProject

In the CalciteJniFrontend, there are some basic checks to make sure only
select statements get processed. Any non-query statement falls back to
the current Impala planner.

In this iteration, any query besides the minimal ones listed above
results in a caught exception, and the query is then run through the
current Impala planner. The tests that do work can be found in
calcite.test and are run through the custom cluster test
test_experimental_planner.py.

This iteration should support all types with the exception of complex
types. Calcite does not have a STRING type, so the string type is
represented as VARCHAR(MAXINT), similar to how Hive represents its
STRING type.

The ImpalaTypeConverter file is used to convert Impala Type objects
to the corresponding Calcite objects.

Authorization does not yet work with this commit. A Jira (IMPALA-13011)
has been filed to deal with this.

Change-Id: I453fd75b7b705f4d7de1ed73c3e24cafad0b8c98
Reviewed-on: http://gerrit.cloudera.org:8080/21109
Tested-by: Impala Public Jenkins 
Reviewed-by: Joe McDonnell 


> Support basic queries using Calcite on the frontend
> ---
>
> Key: IMPALA-12872
> URL: https://issues.apache.org/jira/browse/IMPALA-12872
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: fe
>Reporter: Steve Carlin
>Priority: Major
>
> First commit for the Calcite planner.  The idea is to make minimal changes to 
> the current frontend code and keep this code in a separate jar.
>  
> The first commit will support a basic select statement without any filters or 
> functions (e.g. select * from tbl and select c1 from tbl)






[jira] [Commented] (IMPALA-12035) Impala accepts very big numbers but fails to store them correctly

2024-04-25 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840945#comment-17840945
 ] 

ASF subversion and git services commented on IMPALA-12035:
--

Commit 4f033c7750e711c498fb80ff851d4dae0eed55ce in impala's branch 
refs/heads/master from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=4f033c775 ]

IMPALA-12950: Improve error message in case of out-of-range numeric conversions

IMPALA-12035 introduced checks for numeric conversions that are unsafe
and can fail (if the target type cannot store the value, the behaviour
is undefined):
 - from floating-point types to integer types
 - from double to float

However, it can be difficult to trace which part of the query caused
this based on the error message. This change adds the source type, the
destination type and the value to be converted to the error message.
Unfortunately, at this point in the BE, the original SQL is not
available, so we cannot reference that.
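
The shape of the check is roughly the following. This is a hedged Java
analogue of the C++ backend logic; the class, method, and message wording
are illustrative, not Impala's actual code:

{code:java}
// Range-checked double -> int conversion whose error names the source type,
// the destination type, and the offending value, as the change describes.
public class RangeCheckedCast {
  static int doubleToInt(double v) {
    if (Double.isNaN(v) || v < Integer.MIN_VALUE || v > Integer.MAX_VALUE) {
      throw new ArithmeticException(String.format(
          "Invalid value for conversion from DOUBLE to INT: %s is out of range", v));
    }
    return (int) v;
  }

  public static void main(String[] args) {
    System.out.println(doubleToInt(1e3));  // 1000
    try {
      doubleToInt(1e300);                  // out of range for INT
    } catch (ArithmeticException e) {
      System.out.println(e.getMessage());  // reports both types and the value
    }
  }
}
{code}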

Testing:
 - extended existing tests in expr-test.cc.

Change-Id: Ieeed52e25f155818c35c11a8a6821708476ffb32
Reviewed-on: http://gerrit.cloudera.org:8080/21331
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Impala accepts very big numbers but fails to store them correctly
> -
>
> Key: IMPALA-12035
> URL: https://issues.apache.org/jira/browse/IMPALA-12035
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Bakai Ádám
>Assignee: Daniel Becker
>Priority: Major
> Fix For: Impala 4.3.0
>
>
> I tried to insert rows with very big bigints. It worked as expected with big 
> integers (error message, no new row stored), but it didn't work as expected 
> with ridiculously big integers (no error message, incorrect value stored). 
> Here are the commands used:
> {code:java}
> drop TABLE my_first_table2;
> CREATE TABLE my_first_table2
> (
>   id BIGINT,
>   name STRING,
>   PRIMARY KEY(id)
> );
> INSERT INTO my_first_table2 VALUES (cast(9 as BIGINT), "sarah");
> -- this works just fine as expected
> INSERT INTO my_first_table2 VALUES (cast(99 as BIGINT), 
> "sarah");
> -- ERROR: UDF ERROR: Decimal expression overflowed which is expected since it 
> is over bigint max value (source: 
> https://impala.apache.org/docs/build/plain-html/topics/impala_bigint.html#:~:text=Range%3A%20%2D9223372036854775808%20..,9223372036854775807.
>  )
> INSERT INTO my_first_table2 VALUES 
> (cast(99
>  as BIGINT), "sarah");
> -- this succeeds and doesn't throw the same error as the previous command 
> which is concerning
> select * from my_first_table2;
> -- there are two rows in the table, and the id is incorrect in one of them  
> {code}






[jira] [Commented] (IMPALA-13015) Dataload fails due to concurrency issue with test.jceks

2024-04-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840114#comment-17840114
 ] 

ASF subversion and git services commented on IMPALA-13015:
--

Commit f620e5d5c0bbdb0fd97bac31c7b7439cd13c6d08 in impala's branch 
refs/heads/master from Abhishek Rawat
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=f620e5d5c ]

IMPALA-13015: Dataload fails due to concurrency issue with test.jceks

Moves the 'hadoop credential' command used for creating test.jceks to
testdata/bin/create-load-data.sh. Earlier it was in bin/load-data.py,
which is called in parallel and was causing failures due to race
conditions.
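
The bug class is a plain check-then-act race. A minimal Java illustration
(the original code was Python's os.path.exists/os.remove; the path below
is the one from the report):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DeleteRace {
  public static void main(String[] args) throws IOException {
    Path jceks = Paths.get("testdata/jceks/test.jceks");

    // Racy: a parallel loader may delete the file between these two calls,
    // making delete() throw NoSuchFileException.
    if (Files.exists(jceks)) {
      Files.delete(jceks);
    }

    // Race-free for this process; the actual fix was to move the step out
    // of the code path that runs in parallel altogether.
    Files.deleteIfExists(jceks);
  }
}
{code}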

Testing:
- Ran JniFrontendTest#testGetSecretFromKeyStore after data loading and
test ran clean.

Change-Id: I7fbeffc19f2b78c19fee9acf7f96466c8f4f9bcd
Reviewed-on: http://gerrit.cloudera.org:8080/21346
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Dataload fails due to concurrency issue with test.jceks
> ---
>
> Key: IMPALA-13015
> URL: https://issues.apache.org/jira/browse/IMPALA-13015
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 4.4.0
>Reporter: Joe McDonnell
>Assignee: Abhishek Rawat
>Priority: Major
>  Labels: flaky
>
> When doing dataload locally, it fails with this error:
> {noformat}
> Traceback (most recent call last):
>   File "/home/joemcdonnell/upstream/Impala/bin/load-data.py", line 523, in 
> 
>     if __name__ == "__main__": main()
>   File "/home/joemcdonnell/upstream/Impala/bin/load-data.py", line 322, in 
> main
>     os.remove(jceks_path)
> OSError: [Errno 2] No such file or directory: 
> '/home/joemcdonnell/upstream/Impala/testdata/jceks/test.jceks'
> Background task Loading functional-query data (pid 501094) failed.
> {noformat}
> testdata/bin/create-load-data.sh calls bin/load-data.py for functional, 
> TPC-H, and TPC-DS in parallel, so this logic has race conditions:
> {noformat}
>   jceks_path = TESTDATA_JCEKS_DIR + "/test.jceks"
>   if os.path.exists(jceks_path):
>     os.remove(jceks_path){noformat}
> I don't see a specific reason for this to be in bin/load-data.py. It should 
> be moved somewhere else that doesn't run in parallel. One possibility is to 
> add a step in testdata/bin/create-load-data.sh.
> This was introduced in 
> [https://github.com/apache/impala/commit/9837637d9342a49288a13a421d4e749818da1432]






[jira] [Commented] (IMPALA-12777) Fix tpcds/tpcds-q66.test

2024-04-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840030#comment-17840030
 ] 

ASF subversion and git services commented on IMPALA-12777:
--

Commit 742a46bc42f581e807e2f7233e9765ca158957dc in impala's branch 
refs/heads/branch-4.4.0 from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=742a46bc4 ]

IMPALA-12777: Fix tpcds/tpcds-q66.test

PlannerTest/tpcds/tpcds-q66.test was mistakenly a copy of
PlannerTest/tpcds/tpcds-q61.test with different predicate values. This
patch replaces the wrong test file with the correct TPC-DS Q66 query.

Testing:
- Pass FE test TpcdsPlannerTest#testQ66.

Change-Id: I5b886f5dc1da213d25f33bd7b01dacca53eaef1b
Reviewed-on: http://gerrit.cloudera.org:8080/21344
Reviewed-by: Wenzhe Zhou 
Tested-by: Impala Public Jenkins 


> Fix tpcds/tpcds-q66.test
> 
>
> Key: IMPALA-12777
> URL: https://issues.apache.org/jira/browse/IMPALA-12777
> Project: IMPALA
>  Issue Type: Test
>  Components: Frontend
>Reporter: Riza Suminto
>Assignee: Riza Suminto
>Priority: Major
>  Labels: ramp-up
>
> tpcds/tpcds-q66.test appears to be mistakenly a copy of tpcds/tpcds-q61.test 
> with different predicate values.
> [https://github.com/apache/impala/blob/00247c2/testdata/workloads/functional-planner/queries/PlannerTest/tpcds/tpcds-q66.test]
> [https://github.com/apache/impala/blob/00247c2/testdata/workloads/functional-planner/queries/PlannerTest/tpcds/tpcds-q61.test]






[jira] [Commented] (IMPALA-13002) Iceberg V2 tables with Avro delete files aren't read properly

2024-04-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840031#comment-17840031
 ] 

ASF subversion and git services commented on IMPALA-13002:
--

Commit 99ce967ba60666adff5dd74fd38a06fee7f2c521 in impala's branch 
refs/heads/branch-4.4.0 from Zoltan Borok-Nagy
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=99ce967ba ]

IMPALA-13002: Iceberg V2 tables with Avro delete files aren't read properly

If the Iceberg table has Avro delete files (e.g. by setting
'write.delete.format.default'='avro') then Impala won't be able to read
the contents of the delete files properly. This is because the Avro
schema is not set properly for the virtual delete table.

Testing:
 * added e2e tests with position delete files of all kinds

Change-Id: Iff13198991caf32c51cd9e0ace4454fd00216cf6
Reviewed-on: http://gerrit.cloudera.org:8080/21301
Tested-by: Impala Public Jenkins 
Reviewed-by: Daniel Becker 
Reviewed-by: Gabor Kaszab 


> Iceberg V2 tables with Avro delete files aren't read properly
> -
>
> Key: IMPALA-13002
> URL: https://issues.apache.org/jira/browse/IMPALA-13002
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Reporter: Zoltán Borók-Nagy
>Assignee: Zoltán Borók-Nagy
>Priority: Major
>  Labels: impala-iceberg
>
> If the Iceberg table has Avro delete files (e.g. by setting 
> 'write.delete.format.default'='avro') then Impala won't be able to read the 
> contents of the delete files properly.
> Therefore delete records won't be applied to the Iceberg table during reads. 
> The new Iceberg V2 operator will raise errors in this case, but if it is 
> disabled via 'set disable_optimized_iceberg_v2_read=true' then Impala won't 
> be able to detect invalid delete records.






[jira] [Commented] (IMPALA-12777) Fix tpcds/tpcds-q66.test

2024-04-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840022#comment-17840022
 ] 

ASF subversion and git services commented on IMPALA-12777:
--

Commit 850709cece2ce6c188133551252acc96742157d7 in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=850709cec ]

IMPALA-12777: Fix tpcds/tpcds-q66.test

PlannerTest/tpcds/tpcds-q66.test was mistakenly a copy of
PlannerTest/tpcds/tpcds-q61.test with different predicate values. This
patch replaces the wrong test file with the correct TPC-DS Q66 query.

Testing:
- Pass FE test TpcdsPlannerTest#testQ66.

Change-Id: I5b886f5dc1da213d25f33bd7b01dacca53eaef1b
Reviewed-on: http://gerrit.cloudera.org:8080/21344
Reviewed-by: Wenzhe Zhou 
Tested-by: Impala Public Jenkins 


> Fix tpcds/tpcds-q66.test
> 
>
> Key: IMPALA-12777
> URL: https://issues.apache.org/jira/browse/IMPALA-12777
> Project: IMPALA
>  Issue Type: Test
>  Components: Frontend
>Reporter: Riza Suminto
>Assignee: Riza Suminto
>Priority: Major
>  Labels: ramp-up
>
> tpcds/tpcds-q66.test appears to be mistakenly a copy of tpcds/tpcds-q61.test 
> with different predicate values.
> [https://github.com/apache/impala/blob/00247c2/testdata/workloads/functional-planner/queries/PlannerTest/tpcds/tpcds-q66.test]
> [https://github.com/apache/impala/blob/00247c2/testdata/workloads/functional-planner/queries/PlannerTest/tpcds/tpcds-q61.test]






[jira] [Commented] (IMPALA-13002) Iceberg V2 tables with Avro delete files aren't read properly

2024-04-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840023#comment-17840023
 ] 

ASF subversion and git services commented on IMPALA-13002:
--

Commit ec73b542c5900b59c2504ee9a59b38822dd7d9f0 in impala's branch 
refs/heads/master from Zoltan Borok-Nagy
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=ec73b542c ]

IMPALA-13002: Iceberg V2 tables with Avro delete files aren't read properly

If the Iceberg table has Avro delete files (e.g. by setting
'write.delete.format.default'='avro') then Impala won't be able to read
the contents of the delete files properly. This is because the Avro
schema is not set properly for the virtual delete table.

Testing:
 * added e2e tests with position delete files of all kinds

Change-Id: Iff13198991caf32c51cd9e0ace4454fd00216cf6
Reviewed-on: http://gerrit.cloudera.org:8080/21301
Tested-by: Impala Public Jenkins 
Reviewed-by: Daniel Becker 
Reviewed-by: Gabor Kaszab 


> Iceberg V2 tables with Avro delete files aren't read properly
> -
>
> Key: IMPALA-13002
> URL: https://issues.apache.org/jira/browse/IMPALA-13002
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Reporter: Zoltán Borók-Nagy
>Assignee: Zoltán Borók-Nagy
>Priority: Major
>  Labels: impala-iceberg
>
> If the Iceberg table has Avro delete files (e.g. by setting 
> 'write.delete.format.default'='avro') then Impala won't be able to read the 
> contents of the delete files properly.
> Therefore delete records won't be applied to the Iceberg table during reads. 
> The new Iceberg V2 operator will raise errors in this case, but if it is 
> disabled via 'set disable_optimized_iceberg_v2_read=true' then Impala won't 
> be able to detect invalid delete records.






[jira] [Commented] (IMPALA-12938) test_no_inaccessible_objects failed in JDK11 build

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839696#comment-17839696
 ] 

ASF subversion and git services commented on IMPALA-12938:
--

Commit b754a9494da4da56bf215276c206dba3b34cf539 in impala's branch 
refs/heads/branch-4.4.0 from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=b754a9494 ]

IMPALA-12938: add-opens for platform.cgroupv1

Adds '--add-opens=jdk.internal.platform.cgroupv1' for Java 11 with
ehcache, covering Impala daemons and frontend tests. Fixes
InaccessibleObjectException detected by test_banned_log_messages.py.

Change-Id: I312ae987c17c6f06e1ffe15e943b1865feef6b82
Reviewed-on: http://gerrit.cloudera.org:8080/21334
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> test_no_inaccessible_objects failed in JDK11 build
> --
>
> Key: IMPALA-12938
> URL: https://issues.apache.org/jira/browse/IMPALA-12938
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Zoltán Borók-Nagy
>Assignee: Michael Smith
>Priority: Major
>  Labels: broken-build
> Fix For: Impala 4.4.0
>
>
> h3. Error Message
> {noformat}
> AssertionError: 
> /data/jenkins/workspace/impala-asf-master-core-jdk11/repos/Impala/logs/custom_cluster_tests/impalad.impala-ec2-centos79-m6i-4xlarge-xldisk-197f.vpc.cloudera.com.jenkins.log.INFO.20240323-184351.16035
>  contains 'InaccessibleObjectException' assert 0 == 1{noformat}
> h3. Stacktrace
> {noformat}
> verifiers/test_banned_log_messages.py:40: in test_no_inaccessible_objects
> self.assert_message_absent('InaccessibleObjectException')
> verifiers/test_banned_log_messages.py:36: in assert_message_absent
> assert returncode == 1, "%s contains '%s'" % (log_file_path, message)
> E   AssertionError: 
> /data/jenkins/workspace/impala-asf-master-core-jdk11/repos/Impala/logs/custom_cluster_tests/impalad.impala-ec2-centos79-m6i-4xlarge-xldisk-197f.vpc.cloudera.com.jenkins.log.INFO.20240323-184351.16035
>  contains 'InaccessibleObjectException'
> E   assert 0 == 1{noformat}
> h3. Standard Output
> {noformat}
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1MemorySubSystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.memory accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1SubsystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.cpu accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1SubsystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.cpuacct accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1SubsystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.cpuset accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1SubsystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.blkio accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1SubsystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.pids accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> {noformat}






[jira] [Commented] (IMPALA-13004) heap-use-after-free error in ExprTest AiFunctionsTest

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839692#comment-17839692
 ] 

ASF subversion and git services commented on IMPALA-13004:
--

Commit a4a755d173822d3e123a871ffad6203ca98ff9f5 in impala's branch 
refs/heads/branch-4.4.0 from Yida Wu
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=a4a755d17 ]

IMPALA-13004: Fix heap-use-after-free error in ExprTest AiFunctionsTest

The issue is that the code previously used a std::string_view to
hold the data returned by rapidjson::Document. However, the
rapidjson::Document object is destroyed after the std::string_view
is created. This meant the std::string_view referenced memory that
was no longer valid, leading to a heap-use-after-free error.

This patch fixes this issue by modifying the function to
return a std::string instead of a std::string_view. When the
function returns a string, it creates a copy of the
data from rapidjson::Document. This ensures the returned
string has its own memory allocation and doesn't rely on
the destroyed rapidjson::Document.

Tests:
Reran the ASAN build and it passed.

Change-Id: I3bb9dcf9d72cce7ad37d5bc25821cf6ee55a8ab5
Reviewed-on: http://gerrit.cloudera.org:8080/21315
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> heap-use-after-free error in ExprTest AiFunctionsTest
> -
>
> Key: IMPALA-13004
> URL: https://issues.apache.org/jira/browse/IMPALA-13004
> Project: IMPALA
>  Issue Type: Bug
>  Components: be
>Affects Versions: Impala 4.4.0
>Reporter: Andrew Sherman
>Assignee: Yida Wu
>Priority: Critical
> Fix For: Impala 4.4.0
>
>
> In an ASAN test, expr-test fails:
> {code}
> ==1601==ERROR: AddressSanitizer: heap-use-after-free on address 
> 0x63100152c826 at pc 0x0298f841 bp 0x7ffc91fff460 sp 0x7ffc91fff458
> READ of size 2 at 0x63100152c826 thread T0
> #0 0x298f840 in rapidjson::GenericValue, 
> rapidjson::MemoryPoolAllocator >::GetType() const 
> /data/jenkins/workspace/impala-cdw-master-core-asan/Impala-Toolchain/toolchain-packages-gcc10.4.0/rapidjson-1.1.0/include/rapidjson/document.h:936:62
> #1 0x298d852 in bool rapidjson::GenericValue, 
> rapidjson::MemoryPoolAllocator 
> >::Accept,
>  rapidjson::CrtAllocator>, rapidjson::UTF8, rapidjson::UTF8, 
> rapidjson::CrtAllocator, 0u> 
> >(rapidjson::Writer, 
> rapidjson::CrtAllocator>, rapidjson::UTF8, rapidjson::UTF8, 
> rapidjson::CrtAllocator, 0u>&) const 
> /data/jenkins/workspace/impala-cdw-master-core-asan/Impala-Toolchain/toolchain-packages-gcc10.4.0/rapidjson-1.1.0/include/rapidjson/document.h:1769:16
> #2 0x298d8d0 in bool rapidjson::GenericValue, 
> rapidjson::MemoryPoolAllocator 
> >::Accept,
>  rapidjson::CrtAllocator>, rapidjson::UTF8, rapidjson::UTF8, 
> rapidjson::CrtAllocator, 0u> 
> >(rapidjson::Writer, 
> rapidjson::CrtAllocator>, rapidjson::UTF8, rapidjson::UTF8, 
> rapidjson::CrtAllocator, 0u>&) const 
> /data/jenkins/workspace/impala-cdw-master-core-asan/Impala-Toolchain/toolchain-packages-gcc10.4.0/rapidjson-1.1.0/include/rapidjson/document.h:1790:21
> #3 0x298d9e8 in bool rapidjson::GenericValue, 
> rapidjson::MemoryPoolAllocator 
> >::Accept,
>  rapidjson::CrtAllocator>, rapidjson::UTF8, rapidjson::UTF8, 
> rapidjson::CrtAllocator, 0u> 
> >(rapidjson::Writer, 
> rapidjson::CrtAllocator>, rapidjson::UTF8, rapidjson::UTF8, 
> rapidjson::CrtAllocator, 0u>&) const 
> /data/jenkins/workspace/impala-cdw-master-core-asan/Impala-Toolchain/toolchain-packages-gcc10.4.0/rapidjson-1.1.0/include/rapidjson/document.h:1781:21
> #4 0x28a0707 in impala_udf::StringVal 
> impala::AiFunctions::AiGenerateTextInternal(impala_udf::FunctionContext*,
>  impala_udf::StringVal const&, impala_udf::StringVal const&, 
> impala_udf::StringVal const&, impala_udf::StringVal const&, 
> impala_udf::StringVal const&, bool) 
> /data/jenkins/workspace/impala-cdw-master-core-asan/repos/Impala/be/src/exprs/ai-functions.inline.h:140:11
> #5 0x286087e in impala::ExprTest_AiFunctionsTest_Test::TestBody() 
> /data/jenkins/workspace/impala-cdw-master-core-asan/repos/Impala/be/src/exprs/expr-test.cc:11254:12
> #6 0x8aeaa4c in void 
> testing::internal::HandleExceptionsInMethodIfSupported void>(testing::Test*, void (testing::Test::*)(), char const*) 
> (/data0/jenkins/workspace/impala-cdw-master-core-asan/repos/Impala/be/build/debug/service/unifiedbetests+0x8aeaa4c)
> #7 0x8ae3ec4 in testing::Test::Run() 
> (/data0/jenkins/workspace/impala-cdw-master-core-asan/repos/Impala/be/build/debug/service/unifiedbetests+0x8ae3ec4)
> #8 0x8ae4007 in testing::TestInfo::Run() 
> (/data0/jenkins/workspace/impala-cdw-master-core-asan/repos/Impala/be/build/debug/service/unifiedbetests+0x8ae4007)
> #9 0x8ae40e4 in 

[jira] [Commented] (IMPALA-13016) Fix ambiguous row_regex that checks for non-existence

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839695#comment-17839695
 ] 

ASF subversion and git services commented on IMPALA-13016:
--

Commit af31854a265e54111e6caf870ca28753038d5bde in impala's branch 
refs/heads/branch-4.4.0 from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=af31854a2 ]

IMPALA-13016: Fix ambiguous row_regex that checks for non-existence

There are a few row_regex patterns used in EE test files that are
ambiguous as to whether a pattern must not exist in all parts of the
results/runtime profile or merely at least one row must not have it.
These were caught by grepping the following pattern:

$ git grep -n "row_regex: (?\!"

This patch replaces them with either !row_regex or a VERIFY_IS_NOT_IN
comment.
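
To see why the bare negative lookahead is ambiguous: as an unanchored
search it matches at some position of almost any line, even one that
contains the banned text. A small demo (Java's regex lookahead behaves
like Python's here; the sample line is illustrative):

{code:java}
import java.util.regex.Pattern;

public class LookaheadDemo {
  public static void main(String[] args) {
    Pattern p = Pattern.compile("(?!.*COLUMN_STATS_ACCURATE)");
    String bad = "parameters: COLUMN_STATS_ACCURATE=true";

    // Unanchored search: the zero-width lookahead succeeds near the end of
    // the line, so the "check" passes even though the banned text is there.
    System.out.println(p.matcher(bad).find());      // true

    // Anchored at the start, the lookahead sees the banned text and fails.
    System.out.println(p.matcher(bad).lookingAt()); // false
  }
}
{code}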

Testing:
- Run and pass modified tests.

Change-Id: Ic81de34bf997dfaf1c199b1fe1b05346b55ff4da
Reviewed-on: http://gerrit.cloudera.org:8080/21333
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Fix ambiguous row_regex that checks for non-existence
> ---
>
> Key: IMPALA-13016
> URL: https://issues.apache.org/jira/browse/IMPALA-13016
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Reporter: Riza Suminto
>Assignee: Riza Suminto
>Priority: Minor
> Fix For: Impala 4.4.0
>
>
> There are a few row_regex patterns used in EE test files that are ambiguous 
> as to whether a pattern must not exist in all parts of the results/runtime 
> profile or at least one row must not have that pattern:
> {code:java}
> $ git grep -n "row_regex: (?\!"
> testdata/workloads/functional-query/queries/QueryTest/acid-clear-statsaccurate.test:34:row_regex:
>  (?!.*COLUMN_STATS_ACCURATE)
> testdata/workloads/functional-query/queries/QueryTest/acid-truncate.test:47:row_regex:
>  (?!.*COLUMN_STATS_ACCURATE)
> testdata/workloads/functional-query/queries/QueryTest/clear-statsaccurate.test:28:row_regex:
>  (?!.*COLUMN_STATS_ACCURATE)
> testdata/workloads/functional-query/queries/QueryTest/iceberg-v2-directed-mode.test:14:row_regex:
>  (?!.*F03:JOIN BUILD.*) {code}
> They should be replaced with either !row_regex or a VERIFY_IS_NOT_IN comment.






[jira] [Commented] (IMPALA-12963) Testcase test_query_log_table_lower_max_sql_plan failed in ubsan builds

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839683#comment-17839683
 ] 

ASF subversion and git services commented on IMPALA-12963:
--

Commit b342710f574a0b49a123f774137cf5298844fbac in impala's branch 
refs/heads/branch-4.4.0 from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=b342710f5 ]

IMPALA-12963: Return parent PID when children spawned

Returns the original PID for a command rather than any children that may
be active. This happens during graceful shutdown in UBSAN tests. Also
updates 'kill' to use the version of 'get_pid' that logs details to help
with debugging.

Moves try block in test_query_log.py to after client2 has been
initialized. Removes 'drop table' on unique_database, since test suite
already handles cleanup.

Change-Id: I214e79507c717340863d27f68f6ea54c169e4090
Reviewed-on: http://gerrit.cloudera.org:8080/21278
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Testcase test_query_log_table_lower_max_sql_plan failed in ubsan builds
> ---
>
> Key: IMPALA-12963
> URL: https://issues.apache.org/jira/browse/IMPALA-12963
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.4.0
>Reporter: Yida Wu
>Assignee: Michael Smith
>Priority: Major
> Fix For: Impala 4.4.0
>
>
> Testcase test_query_log_table_lower_max_sql_plan failed in ubsan builds with 
> following messages:
> *Error Message*
> {code:java}
> test setup failure
> {code}
> *Stacktrace*
> {code:java}
> common/custom_cluster_test_suite.py:226: in teardown_method
> impalad.wait_for_exit()
> common/impala_cluster.py:471: in wait_for_exit
> while self.__get_pid() is not None:
> common/impala_cluster.py:414: in __get_pid
> assert len(pids) < 2, "Expected single pid but found %s" % ", 
> ".join(map(str, pids))
> E   AssertionError: Expected single pid but found 892, 31942
> {code}
> *Standard Error*
> {code:java}
> -- 2024-03-28 04:21:44,105 INFO MainThread: Starting cluster with 
> command: 
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan/repos/Impala/bin/start-impala-cluster.py
>  '--state_store_args=--statestore_update_frequency_ms=50 
> --statestore_priority_update_frequency_ms=50 
> --statestore_heartbeat_frequency_ms=50' --cluster_size=3 --num_coordinators=3 
> --log_dir=/data/jenkins/workspace/impala-cdw-master-staging-core-ubsan/repos/Impala/logs/custom_cluster_tests
>  --log_level=1 '--impalad_args=--enable_workload_mgmt 
> --query_log_write_interval_s=1 --cluster_id=test_max_select 
> --shutdown_grace_period_s=10 --shutdown_deadline_s=60 
> --query_log_max_sql_length=2000 --query_log_max_plan_length=2000 ' 
> '--state_store_args=None ' '--catalogd_args=--enable_workload_mgmt ' 
> --impalad_args=--default_query_options=
> 04:21:44 MainThread: Found 0 impalad/0 statestored/0 catalogd process(es)
> 04:21:44 MainThread: Starting State Store logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan/repos/Impala/logs/custom_cluster_tests/statestored.INFO
> 04:21:44 MainThread: Starting Catalog Service logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan/repos/Impala/logs/custom_cluster_tests/catalogd.INFO
> 04:21:44 MainThread: Starting Impala Daemon logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan/repos/Impala/logs/custom_cluster_tests/impalad.INFO
> 04:21:44 MainThread: Starting Impala Daemon logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan/repos/Impala/logs/custom_cluster_tests/impalad_node1.INFO
> 04:21:44 MainThread: Starting Impala Daemon logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan/repos/Impala/logs/custom_cluster_tests/impalad_node2.INFO
> 04:21:47 MainThread: Found 3 impalad/1 statestored/1 catalogd process(es)
> 04:21:47 MainThread: Found 3 impalad/1 statestored/1 catalogd process(es)
> 04:21:47 MainThread: Getting num_known_live_backends from 
> impala-ec2-centos79-m6i-4xlarge-ondemand-174b.vpc.cloudera.com:25000
> 04:21:47 MainThread: Waiting for num_known_live_backends=3. Current value: 0
> 04:21:48 MainThread: Found 3 impalad/1 statestored/1 catalogd process(es)
> 04:21:48 MainThread: Getting num_known_live_backends from 
> impala-ec2-centos79-m6i-4xlarge-ondemand-174b.vpc.cloudera.com:25000
> 04:21:48 MainThread: Waiting for num_known_live_backends=3. Current value: 0
> 04:21:49 MainThread: Found 3 impalad/1 statestored/1 catalogd process(es)
> 04:21:49 MainThread: Getting num_known_live_backends from 
> impala-ec2-centos79-m6i-4xlarge-ondemand-174b.vpc.cloudera.com:25000
> 04:21:49 MainThread: Waiting for num_known_live_backends=3. Current value: 2
> 04:21:50 MainThread: 

[jira] [Commented] (IMPALA-12679) test_rows_sent_counters failed to match RPCCount

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839686#comment-17839686
 ] 

ASF subversion and git services commented on IMPALA-12679:
--

Commit 9190a28870c727f9c260cc245eea1c153ee11ff2 in impala's branch 
refs/heads/branch-4.4.0 from Kurt Deschler
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=9190a2887 ]

IMPALA-12679: Improve test_rows_sent_counters assert

This patch changes the assert for failed test test_rows_sent_counters so
that the actual RPC count is displayed in the assert output. The root
cause of the failure will be addressed once sufficient data is collected
with the new output.

Testing:
  Ran test_rows_sent_counters with modified expected RPC count range to
simulate failure.

Change-Id: Ic6b48cf4039028e749c914ee60b88f04833a0069
Reviewed-on: http://gerrit.cloudera.org:8080/21310
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> test_rows_sent_counters failed to match RPCCount
> 
>
> Key: IMPALA-12679
> URL: https://issues.apache.org/jira/browse/IMPALA-12679
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Michael Smith
>Assignee: Kurt Deschler
>Priority: Major
>
> {code}
> query_test.test_fetch.TestFetch.test_rows_sent_counters[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]
> {code}
> failed with
> {code}
> query_test/test_fetch.py:69: in test_rows_sent_counters
> assert re.search("RPCCount: [5-9]", runtime_profile)
> E   assert None
> E+  where None = ('RPCCount: [5-9]', 
> 'Query (id=c8476e5c065757bf:b4367698):\n  DEBUG MODE WARNING: Query 
> profile created while running a DEBUG buil...: 0.000ns\n - 
> WriteIoBytes: 0\n - WriteIoOps: 0 (0)\n - 
> WriteIoWaitTime: 0.000ns\n')
> E+where  = re.search
> {code}






[jira] [Commented] (IMPALA-12933) Catalogd should set eventTypeSkipList when fetching specific events for a table

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839699#comment-17839699
 ] 

ASF subversion and git services commented on IMPALA-12933:
--

Commit 0767d656ef00a381441fdcc3ebb3f146fb0d179c in impala's branch 
refs/heads/branch-4.4.0 from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=0767d656e ]

IMPALA-12933: Avoid fetching unnecessary events of unwanted types

There are several places where catalogd will fetch all events of a
specific type on a table. E.g. in TableLoader#load(), if the table has
an old createEventId, catalogd will fetch all CREATE_TABLE events after
that createEventId on the table.

Fetching the list of events is expensive since the filtering is done on
the client side, i.e. catalogd fetches all events and filters them
locally based on the event type and table name. This could take hours if
there are lots of events (e.g. 1M) in HMS.

This patch sets the eventTypeSkipList to the complement set of the
wanted type, so the get_next_notification RPC can filter out some events
on the HMS side. To avoid adding too much computational overhead to
HMS's underlying RDBMS in evaluating EVENT_TYPE != 'xxx' predicates,
rare event types (e.g. DROP_ISCHEMA) are not added to the list. A new
flag, common_hms_event_types, is added to specify the common HMS event
types.

Once HIVE-28146 is resolved, we can set the wanted types directly in the
HMS RPC and this approach can be simplified.
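
The complement-set construction looks roughly like this; the event type
names below are illustrative examples, not the actual flag defaults:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class SkipListSketch {
  public static void main(String[] args) {
    // Common HMS event types (illustrative list); rare types are left out
    // so the HMS database never evaluates EVENT_TYPE != 'xxx' for them.
    Set<String> commonTypes = new LinkedHashSet<>(Arrays.asList(
        "CREATE_TABLE", "DROP_TABLE", "ALTER_TABLE",
        "ADD_PARTITION", "DROP_PARTITION", "ALTER_PARTITION", "RELOAD"));
    String wantedType = "CREATE_TABLE";

    // Skip every common type except the wanted one.
    List<String> skipList = new ArrayList<>(commonTypes);
    skipList.remove(wantedType);
    System.out.println(skipList);  // passed as eventTypeSkipList in the RPC
  }
}
{code}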

UPDATE_TBL_COL_STAT_EVENT, UPDATE_PART_COL_STAT_EVENT are the most
common unused events for Impala. They are also added to the default skip
list. A new flag, default_skipped_hms_event_types, is added to configure
this list.

This patch also fixes an issue that events of the non-default catalog
are not filtered out.

In a local perf test, I generated 100K RELOAD events after creating a
table in Hive, then used the table in Impala to trigger metadata loading
on it, which fetches the latest CREATE_TABLE event by polling all events
after the last known CREATE_TABLE event. Before this patch, fetching the
events took 1s779ms. Now it takes only 395.377ms. Note that in prod
environments the event messages are usually larger, so the speedup could
be larger as well.

Tests:
 - Added an FE test
 - Ran CORE tests

Change-Id: Ieabe714328aa2cc605cb62b85ae8aa4bd537dbe9
Reviewed-on: http://gerrit.cloudera.org:8080/21186
Reviewed-by: Csaba Ringhofer 
Tested-by: Impala Public Jenkins 


> Catalogd should set eventTypeSkipList when fetching specific events for a 
> table
> ---
>
> Key: IMPALA-12933
> URL: https://issues.apache.org/jira/browse/IMPALA-12933
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Quanlong Huang
>Assignee: Quanlong Huang
>Priority: Critical
> Fix For: Impala 4.4.0
>
>
> There are several places where catalogd will fetch all events of a specific 
> type on a table. E.g. in TableLoader#load(), if the table has an old 
> createEventId, catalogd will fetch all CREATE_TABLE events after that 
> createEventId on the table.
> Fetching the list of events is expensive since the filtering is done on the 
> client side, i.e. catalogd fetches all events and filters them locally based 
> on the event type and table name:
> [https://github.com/apache/impala/blob/14e3ed4f97292499b2e6ee8d5a756dc648d9/fe/src/main/java/org/apache/impala/catalog/TableLoader.java#L98-L102]
> [https://github.com/apache/impala/blob/b7ddbcad0dd6accb559a3f391a897a8c442d1728/fe/src/main/java/org/apache/impala/catalog/events/MetastoreEventsProcessor.java#L336]
> This could take hours if there are lots of events (e.g. 1M) in HMS. In fact, 
> NotificationEventRequest can specify an eventTypeSkipList, so catalogd can do 
> the event type filtering on the HMS side. On higher Hive versions that have 
> HIVE-27499, catalogd can also specify the table name in the request 
> (IMPALA-12607).
> This Jira focuses on specifying the eventTypeSkipList when fetching events of 
> a particular type on a table.






[jira] [Commented] (IMPALA-13003) Server exits early failing to create impala_query_log with AlreadyExistsException

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839687#comment-17839687
 ] 

ASF subversion and git services commented on IMPALA-13003:
--

Commit 1a90388b19500289fbb8c3765aee9a8882ae3c04 in impala's branch 
refs/heads/branch-4.4.0 from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=1a90388b1 ]

IMPALA-13003: Handle Iceberg AlreadyExistsException

When multiple coordinators attempt to create the same table concurrently
with "if not exists", we still see

  AlreadyExistsException: Table was created concurrently: my_iceberg_tbl

Iceberg throws its own version of AlreadyExistsException, but we avoid
most code paths that would throw it because we first check HMS to see if
the table exists before trying to create it.

Updates createIcebergTable to handle Iceberg's AlreadyExistsException
identically to the HMS AlreadyExistsException.
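
A hedged sketch of that handling; the helper names are hypothetical, not
Impala's actual createIcebergTable, while the exception class and its
format-style constructor are Iceberg's:

{code:java}
import org.apache.iceberg.exceptions.AlreadyExistsException;

public class CreateTableSketch {
  static void createTable(String name, boolean ifNotExists) {
    try {
      doCreate(name);                     // may race with another coordinator
    } catch (AlreadyExistsException e) {  // Iceberg's unchecked exception
      if (!ifNotExists) {
        throw e;                          // plain CREATE TABLE: still an error
      }
      // IF NOT EXISTS: another coordinator won the race; nothing to do,
      // just like for the HMS AlreadyExistsException.
    }
  }

  static void doCreate(String name) {     // stand-in for the real creation path
    throw new AlreadyExistsException("Table was created concurrently: %s", name);
  }

  public static void main(String[] args) {
    createTable("my_iceberg_tbl", true);  // completes silently
  }
}
{code}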

Adds a test using DebugAction to simulate concurrent table creation.

Change-Id: I847eea9297c9ee0d8e821fe1c87ea03d22f1d96e
Reviewed-on: http://gerrit.cloudera.org:8080/21312
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Server exits early failing to create impala_query_log with 
> AlreadyExistsException
> -
>
> Key: IMPALA-13003
> URL: https://issues.apache.org/jira/browse/IMPALA-13003
> Project: IMPALA
>  Issue Type: Bug
>  Components: be
>Affects Versions: Impala 4.4.0
>Reporter: Andrew Sherman
>Assignee: Michael Smith
>Priority: Critical
>  Labels: iceberg, workload-management
> Fix For: Impala 4.4.0
>
>
> At startup workload management tries to create the query log table here:
> {code:java}
>   // The initialization code only works when run in a separate thread for 
> reasons unknown.
>   ABORT_IF_ERROR(SetupDbTable(internal_server_.get(), table_name));
> {code}
> This code is exiting:
> {code:java}
> I0413 23:40:05.183876 21006 client-request-state.cc:1348] 
> 1d4878dbc9214c81:6dc8cc2e] ImpalaRuntimeException: Error making 
> 'createTable' RPC to Hive Metastore:
> CAUSED BY: AlreadyExistsException: Table was created concurrently: 
> sys.impala_query_log
> I0413 23:40:05.184055 20955 impala-server.cc:2582] Connection 
> 27432606d99dcdae:218860164eb206bb from client in-memory.localhost:0 to server 
> internal-server closed. The connection had 1 associated session(s).
> I0413 23:40:05.184067 20955 impala-server.cc:1780] Closing session: 
> 27432606d99dcdae:218860164eb206bb
> I0413 23:40:05.184083 20955 impala-server.cc:1836] Closed session: 
> 27432606d99dcdae:218860164eb206bb, client address: .
> F0413 23:40:05.184111 20955 workload-management.cc:304] query timed out 
> waiting for results
> . Impalad exiting.
> I0413 23:40:05.184728 20883 impala-server.cc:1564] Query successfully 
> unregistered: query_id=1d4878dbc9214c81:6dc8cc2e
> Minidump in thread [20955]completed-queries running query 
> :, fragment instance 
> :
> Wrote minidump to 
> /data/jenkins/workspace/impala-cdw-master-core-ubsan/repos/Impala/logs/custom_cluster_tests/minidumps/impalad/402f37cc-4663-4c78-086ca295-a9e5943c.dmp
> {code}
> with stack
> {code:java}
> F0413 23:40:05.184111 20955 workload-management.cc:304] query timed out 
> waiting for results
> . Impalad exiting.
> *** Check failure stack trace: ***
> @  0x8e96a4d  google::LogMessage::Fail()
> @  0x8e98984  google::LogMessage::SendToLog()
> @  0x8e9642c  google::LogMessage::Flush()
> @  0x8e98ea9  google::LogMessageFatal::~LogMessageFatal()
> @  0x3da3a9a  impala::ImpalaServer::CompletedQueriesThread()
> @  0x3a8df93  boost::_mfi::mf0<>::operator()()
> @  0x3a8de97  boost::_bi::list1<>::operator()<>()
> @  0x3a8dd77  boost::_bi::bind_t<>::operator()()
> @  0x3a8d672  
> boost::detail::function::void_function_obj_invoker0<>::invoke()
> @  0x301e7d0  boost::function0<>::operator()()
> @  0x43ce415  impala::Thread::SuperviseThread()
> @  0x43e2dc7  boost::_bi::list5<>::operator()<>()
> @  0x43e29e7  boost::_bi::bind_t<>::operator()()
> @  0x43e21c5  boost::detail::thread_data<>::run()
> @  0x7984c37  thread_proxy
> @ 0x7f75b6982ea5  start_thread
> @ 0x7f75b36a7b0d  __clone
> Picked up JAVA_TOOL_OPTIONS: 
> -agentlib:jdwp=transport=dt_socket,address=3,server=y,suspend=n   
> -Dsun.java.command=impalad
> Minidump in thread [20955]completed-queries running query 
> :, fragment instance 
> :
> {code}
> I think the key error is 
> 

[jira] [Commented] (IMPALA-13006) Some Iceberg test tables are not restricted to Parquet

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839685#comment-17839685
 ] 

ASF subversion and git services commented on IMPALA-13006:
--

Commit 577b18c3744a4cde5a623d68d37e52d46637a3a6 in impala's branch 
refs/heads/branch-4.4.0 from Noemi Pap-Takacs
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=577b18c37 ]

IMPALA-13006: Restrict Iceberg tables to Parquet

Iceberg test tables/views are restricted to the Parquet file format
in functional/schema_constraints.csv. The following two were
unintentionally left out:
iceberg_query_metadata
iceberg_view

Added the constraint for these tables too.

Testing:
- executed data load for the functional dataset

Change-Id: I2590d7a70fe6aaf1277b19e6b23015d39d2935cb
Reviewed-on: http://gerrit.cloudera.org:8080/21306
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Some Iceberg test tables are not restricted to Parquet
> --
>
> Key: IMPALA-13006
> URL: https://issues.apache.org/jira/browse/IMPALA-13006
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Daniel Becker
>Assignee: Noemi Pap-Takacs
>Priority: Major
>  Labels: impala-iceberg
>
> Our Iceberg test tables/views are restricted to the Parquet file format in 
> functional/schema_constraints.csv except for the following two:
> {code:java}
> iceberg_query_metadata
> iceberg_view{code}
> This is not intentional, so we should add the constraint for these tables too.






[jira] [Commented] (IMPALA-12874) Identify Catalog HA and StateStore HA from the web debug endpoint

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839691#comment-17839691
 ] 

ASF subversion and git services commented on IMPALA-12874:
--

Commit 6c738bc3fe9d765254b45d62c859275eaaa16a0f in impala's branch 
refs/heads/branch-4.4.0 from Yida Wu
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=6c738bc3f ]

IMPALA-12874: Identify active and standby catalog and statestore in the web 
debug endpoint

This patch adds support to display the HA status of catalog and
statestore on the root web page. The status will be presented
as "Catalog Status: Active" or "Statestore Status: Standby"
based on the values retrieved from the metrics
catalogd-server.active-status and statestore.active-status.

If the catalog or statestore is standalone, it will show active as
the status, which is the same as the metric.

Tests:
Ran core tests.
Manually tested the web page and verified the status display is
correct. Also checked that when a failover happens, the 'standby'
status changes to 'active'.

Change-Id: Ie9435ba7a9549ea56f9d080a9315aecbcc630cd2
Reviewed-on: http://gerrit.cloudera.org:8080/21294
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Identify Catalog HA and StateStore HA from the web debug endpoint
> -
>
> Key: IMPALA-12874
> URL: https://issues.apache.org/jira/browse/IMPALA-12874
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Reporter: gaurav singh
>Assignee: Yida Wu
>Priority: Major
> Fix For: Impala 4.4.0
>
>
> Identify which Catalog and Statestore instance is active from the web debug 
> endpoint.
> For the HA, we should improve the label on catalog and statestore web 
> response to indicate "Active" instance.
> On the main page we could indicate "Status: Active" or "Status: Stand-by". We 
> could probably add the status at the top of the main page before *"Version"*. 
> The current details as the output of a curl request on port 25020:
> {code:java}
>   Version
>   catalogd version 4.0.0.2024.0.18.0-61 RELEASE (build 
> 82901f3f83fa4c25b318ebf825a1505d89209356)
> Built on Fri Mar  1 20:13:09 UTC 2024
> Build Flags: is_ndebug=true  cmake_build_type=RELEASE  
> library_link_type=STATIC  
> 
>   Process Start Time
> ..
> {code}
> Corresponding curl request on statestored on 25010:
> {code:java}
>   Version
>   statestored version 4.0.0.2024.0.18.0-61 RELEASE 
> (build 82901f3f83fa4c25b318ebf825a1505d89209356)
> Built on Fri Mar  1 20:13:09 UTC 2024
> Build Flags: is_ndebug=true  cmake_build_type=RELEASE  
> library_link_type=STATIC  
> 
>   Process Start Time
> {code}
> Catalogd active status can be figured out using the catalogd metric: 
> "catalog-server.active-status"
> For statestored active status we probably don't have a metric, so we should 
> add a similar metric and use that to determine the status. 






[jira] [Commented] (IMPALA-12980) Translate CpuAsk into admission control slot to use

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839693#comment-17839693
 ] 

ASF subversion and git services commented on IMPALA-12980:
--

Commit f97042384e0312cfd3943426e048581d9678891d in impala's branch 
refs/heads/branch-4.4.0 from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=f97042384 ]

IMPALA-12980: Translate CpuAsk into admission control slots

Impala has a concept of "admission control slots" - the amount of
parallelism that should be allowed on an Impala daemon. This defaults to
the number of processors per executor and can be overridden with the
--admission_control_slots flag.

Admission control slot accounting is described in IMPALA-8998. It
computes 'slots_to_use' for each backend based on the maximum number of
instances of any fragment on that backend. This can lead to slot
underestimation and query overadmission. For example, assume an executor
node with 48 CPU cores and configured with --admission_control_slots=48.
It is assigned 4 non-blocking query fragments, each with 12 instances
scheduled on this executor. The IMPALA-8998 algorithm will request slots
for the max instance count (12) rather than the sum of all non-blocking
fragment instances (48). With the 36 remaining slots free, the executor can still
admit another fragment from a different query but will potentially have
CPU contention with the one that is currently running.

When COMPUTE_PROCESSING_COST is enabled, Planner will generate a CpuAsk
number that represents the cpu requirement of that query over a
particular executor group set. This number is an estimation of the
largest number of query fragment instances that can run in parallel
without waiting, given by the blocking operator analysis. Therefore, the
fragment trace that sums into that CpuAsk number can be translated into
'slots_to_use' as well, which will be a closer resemblance of maximum
parallel execution of fragment instances.

This patch adds a new query option called SLOT_COUNT_STRATEGY to control
which admission control slot accounting to use. There are two possible
values:
- LARGEST_FRAGMENT, which is the original algorithm from IMPALA-8998.
  This is still the default value for the SLOT_COUNT_STRATEGY option.
- PLANNER_CPU_ASK, which will follow the fragment trace that contributes
  towards CpuAsk number. This strategy will schedule more or equal
  admission control slots than the LARGEST_FRAGMENT strategy.

To implement the PLANNER_CPU_ASK strategy, the Planner marks fragments
that contribute to CpuAsk as dominant fragments. It also passes the
max_slot_per_executor information that it knows about the executor group
set to the scheduler.

AvgAdmissionSlotsPerExecutor counter is added to describe what Planner
thinks the average 'slots_to_use' per backend will be, which follows
this formula:

  AvgAdmissionSlotsPerExecutor = ceil(CpuAsk / num_executors)
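
A worked example of the formula, reusing the 48-core scenario from
earlier in this message (4 non-blocking fragments x 12 instances on one
executor, so CpuAsk = 48; the numbers are illustrative):

{code:java}
public class SlotsExample {
  public static void main(String[] args) {
    int cpuAsk = 48;
    int numExecutors = 1;
    long avgSlots = (long) Math.ceil((double) cpuAsk / numExecutors);
    // PLANNER_CPU_ASK asks for 48 slots here, where LARGEST_FRAGMENT
    // would have asked for max(12, 12, 12, 12) = 12.
    System.out.println(avgSlots);  // 48
  }
}
{code}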

Actual 'slots_to_use' in each backend may differ from
AvgAdmissionSlotsPerExecutor, depending on what is scheduled on that
backend. 'slots_to_use' will be shown as the 'AdmissionSlots' counter
under each executor profile node.

Testing:
- Update test_executors.py with AvgAdmissionSlotsPerExecutor assertion.
- Pass test_tpcds_queries.py::TestTpcdsQueryWithProcessingCost.
- Add EE test test_processing_cost.py.
- Add FE test PlannerTest#testProcessingCostPlanAdmissionSlots.

Change-Id: I338ca96555bfe8d07afce0320b3688a0861663f2
Reviewed-on: http://gerrit.cloudera.org:8080/21257
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Translate CpuAsk into admission control slot to use
> ---
>
> Key: IMPALA-12980
> URL: https://issues.apache.org/jira/browse/IMPALA-12980
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Distributed Exec, Frontend
>Reporter: Riza Suminto
>Assignee: Riza Suminto
>Priority: Major
> Fix For: Impala 4.4.0
>
>
> Admission control slot accounting is described in IMPALA-8998. It computes 
> 'slots_to_use' for each backend based on the max number of instances of any 
> fragment on that backend. This is simplistic, because multiple fragments with 
> the same instance count, say 4 non-blocking fragments each with 12 instances, 
> only request the max instance (12) admission slots rather than the sum (48).
> When COMPUTE_PROCESSING_COST is enabled, Planner will generate a CpuAsk 
> number that represents the cpu requirement of that query over a particular 
> executor group set. This number is an estimation of the largest number of 
> query fragments that can run in parallel given the blocking operator 
> analysis. Therefore, the fragment trace that sums into that CpuAsk 
> number can be translated into 'slots_to_use' as well, which will be a 

[jira] [Commented] (IMPALA-12657) Improve ProcessingCost of ScanNode and NonGroupingAggregator

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839698#comment-17839698
 ] 

ASF subversion and git services commented on IMPALA-12657:
--

Commit af2d28b54d4fb6b182b27a278c2cfa6277f3 in impala's branch 
refs/heads/branch-4.4.0 from David Rorke
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=af2d28b54 ]

IMPALA-12657: Improve ProcessingCost of ScanNode and NonGroupingAggregator

This patch improves the accuracy of the CPU ProcessingCost estimates for
several of the CPU intensive operators by basing the costs on benchmark
data. The general approach for a given operator was to run a set of queries
that exercised the operator under various conditions (e.g. large vs small
row sizes and row counts, varying NDV, different file formats, etc) and
capture the CPU time spent per unit of work (the unit of work might be
measured as some number of rows, some number of bytes, some number of
predicates evaluated, or some combination of these). The data was then
analyzed in an attempt to fit a simple model that would allow us to
predict CPU consumption of a given operator based on information available
at planning time.

For example, the CPU ProcessingCost for a Parquet scan is estimated as:
TotalCost = (0.0144 * BytesMaterialized) + (0.0281 * Rows * Predicate Count)

The coefficients (0.0144 and 0.0281) are derived from benchmarking
scans under a variety of conditions. Similar cost functions and coefficients
were derived for all of the benchmarked operators. The coefficients for all
the operators are normalized such that a single unit of cost equates to
roughly 100 nanoseconds of CPU time on a r5d.4xlarge instance. So we would
predict an operator with a cost of 10,000,000 would complete in roughly one
second on a single core.
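
Plugging illustrative numbers into the Parquet scan formula above (the
scan size, row count, and predicate count are invented for the example):

{code:python}
# Hypothetical scan: 1 GB materialized, 10M rows, 2 predicates.
bytes_materialized = 1_000_000_000
rows = 10_000_000
predicate_count = 2

total_cost = 0.0144 * bytes_materialized + 0.0281 * rows * predicate_count
# One cost unit ~= 100 ns of CPU on an r5d.4xlarge, so:
seconds_on_one_core = total_cost * 100e-9
print(f"cost={total_cost:,.0f} (~{seconds_on_one_core:.2f}s on one core)")
{code}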

Limitations:
* Costing only addresses CPU time spent and doesn't account for any IO
  or other wait time.
* Benchmarking scenarios didn't provide comprehensive coverage of the
  full range of data types, distributions, etc. More thorough
  benchmarking could improve the costing estimates further.
* This initial patch only covers a subset of the operators, focusing
  on those that are most common and most CPU intensive. Specifically
  the following operators are covered by this patch. All others
  continue to use the previous ProcessingCost code:
  AggregationNode
  DataStreamSink (exchange sender)
  ExchangeNode
  HashJoinNode
  HdfsScanNode
  HdfsTableSink
  NestedLoopJoinNode
  SortNode
  UnionNode

Benchmark-based costing of the remaining operators will be covered by
a future patch.

Future patches will automate the collection and analysis of the benchmark
data and the computation of the cost coefficients to simplify maintenance
of the costing as performance changes over time.

Change-Id: Icf1edd48d4ae255b7b3b7f5b228800d7bac7d2ca
Reviewed-on: http://gerrit.cloudera.org:8080/21279
Reviewed-by: Riza Suminto 
Tested-by: Impala Public Jenkins 


> Improve ProcessingCost of ScanNode and NonGroupingAggregator
> 
>
> Key: IMPALA-12657
> URL: https://issues.apache.org/jira/browse/IMPALA-12657
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 4.3.0
>Reporter: Riza Suminto
>Assignee: David Rorke
>Priority: Major
> Fix For: Impala 4.4.0
>
> Attachments: profile_1f4d7a679a3e12d5_42231157.txt
>
>
> Several benchmark runs measuring Impala scan performance indicate some 
> costing improvement opportunities around ScanNode and NonGroupingAggregator.
> [^profile_1f4d7a679a3e12d5_42231157.txt] shows an example of a simple 
> count query.
> Key takeaways:
>  # There is a strong correlation between total materialized bytes (row size * 
> cardinality) and the total tuple materialization time per fragment. Row 
> materialization cost should be adjusted to be based on this row size instead 
> of an equal cost per scan range.
>  # NonGroupingAggregator should have a much lower cost than GroupingAggregator. 
> In the example above, the cost of NonGroupingAggregator dominates the scan 
> fragment even though it only does simple counting instead of hash table 
> operations.






[jira] [Commented] (IMPALA-12543) test_iceberg_self_events failed in JDK11 build

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839700#comment-17839700
 ] 

ASF subversion and git services commented on IMPALA-12543:
--

Commit e1bbdacc5133d36d997e0e19c52753df90376a1e in impala's branch 
refs/heads/branch-4.4.0 from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=e1bbdacc5 ]

IMPALA-12543: Detect self-events before finishing DDL

test_iceberg_self_events has been flaky for not having
tbls_refreshed_before equal to tbls_refreshed_after in-between query
executions.

Further investigation reveals a concurrency bug: the db/table level
lock is not taken during the db/table self-events check (IMPALA-12461
part 1). The order of an ALTER TABLE operation is as follows:

1. alter table starts in CatalogOpExecutor
2. table level lock is taken
3. HMS RPC starts (CatalogOpExecutor.applyAlterTable())
4. HMS generates the event
5. HMS RPC returns
6. table is reloaded
7. catalog version is added to inflight event list
8. table level lock is released

Meanwhile the event processor thread fetches the new event after 4 and
before 7. Because of IMPALA-12461 (part 1), it can also finish
self-events checking before reaching 7. Before IMPALA-12461, self-events
would have needed to wait for 8. Note that this issue is only relevant
for table level events, as self-events checking for partition level
events still takes table lock.

This patch fixes the issue by adding newCatalogVersion to the table's
inflight event list before updating HMS, using the helper class
InProgressTableModification. If the HMS update does not complete (i.e.,
an exception is thrown), the newCatalogVersion that was added is then
removed.
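
A minimal sketch of the reordering described above (Python pseudocode;
the real InProgressTableModification helper lives in the Java catalog
code, so the names below are illustrative):

{code:python}
# Register the new catalog version *before* the HMS RPC so the event
# processor can recognize the resulting event as a self-event.
def apply_alter_table(table, new_catalog_version, hms_client):
    table.inflight_events.append(new_catalog_version)
    try:
        hms_client.alter_table(table)  # HMS generates the event here
    except Exception:
        # HMS update did not complete; undo the speculative registration.
        table.inflight_events.remove(new_catalog_version)
        raise
    table.reload()
{code}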

This patch also fixes a few smaller issues, including:
- Avoid incrementing EVENTS_SKIPPED_METRIC if numFilteredEvents == 0 in
  MetastoreEventFactory.getFilteredEvents().
- Increment EVENTS_SKIPPED_METRIC in
  MetastoreTableEvent.reloadTableFromCatalog() if table is already in
  the middle of reloading (revealed through flaky
  test_skipping_older_events).
- Rephrase misleading log message in
  MetastoreEventProcessor.getNextMetastoreEvents().

Testing:
- Add TestEventProcessingWithImpala, run it with debug_action
  and sync_ddl dimensions.
- Pass exhaustive tests.

Change-Id: I8365c934349ad21a4d9327fc11594d2fc3445f79
Reviewed-on: http://gerrit.cloudera.org:8080/21029
Reviewed-by: Riza Suminto 
Tested-by: Impala Public Jenkins 


> test_iceberg_self_events failed in JDK11 build
> --
>
> Key: IMPALA-12543
> URL: https://issues.apache.org/jira/browse/IMPALA-12543
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Riza Suminto
>Assignee: Riza Suminto
>Priority: Major
>  Labels: broken-build
> Fix For: Impala 4.4.0
>
> Attachments: catalogd.INFO, std_err.txt
>
>
> test_iceberg_self_events failed in JDK11 build with the following error.
>  
> {code:java}
> Error Message
> assert 0 == 1
> Stacktrace
> custom_cluster/test_events_custom_configs.py:637: in test_iceberg_self_events
>     check_self_events("ALTER TABLE {0} ADD COLUMN j INT".format(tbl_name))
> custom_cluster/test_events_custom_configs.py:624: in check_self_events
>     assert tbls_refreshed_before == tbls_refreshed_after
> E   assert 0 == 1 {code}
> This test still passed before IMPALA-11387 was merged.
>  






[jira] [Commented] (IMPALA-12990) impala-shell broken if Iceberg delete deletes 0 rows

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839688#comment-17839688
 ] 

ASF subversion and git services commented on IMPALA-12990:
--

Commit 858794906158439de8b38a66739d62e1231bbd14 in impala's branch 
refs/heads/branch-4.4.0 from Csaba Ringhofer
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=858794906 ]

IMPALA-12990: Fix impala-shell handling of unset rows_deleted

The issue occurred in Python 3 when 0 rows were deleted from Iceberg.
It could also happen in other DMLs with older Impala servers where
TDmlResult.rows_deleted was not set. See the Jira for details of
the error.
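
A minimal sketch of the kind of guard that avoids the failing comparison
(the actual fix is in shell/impala_shell.py; this only illustrates the
None-handling):

{code:python}
# TDmlResult.rows_deleted may be unset (None), e.g. with older servers or
# when an Iceberg delete removes 0 rows; coalesce to 0 before comparing.
num_deleted_rows = None
if (num_deleted_rows or 0) > 0:
    print("rows were deleted")
else:
    print("no rows deleted")
{code}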

Testing:
Extended shell tests for Kudu DML reporting to also cover Iceberg.

Change-Id: I5812b8006b9cacf34a7a0dbbc89a486d8b454438
Reviewed-on: http://gerrit.cloudera.org:8080/21284
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> impala-shell broken if Iceberg delete deletes 0 rows
> 
>
> Key: IMPALA-12990
> URL: https://issues.apache.org/jira/browse/IMPALA-12990
> Project: IMPALA
>  Issue Type: Bug
>  Components: Clients
>Reporter: Csaba Ringhofer
>Assignee: Csaba Ringhofer
>Priority: Major
>  Labels: iceberg
>
> Happens only with Python 3
> {code}
> impala-python3 shell/impala_shell.py
> create table icebergupdatet (i int, s string) stored as iceberg;
> alter table icebergupdatet set tblproperties("format-version"="2");
> delete from icebergupdatet where i=0;
> Unknown Exception : '>' not supported between instances of 'NoneType' and 
> 'int'
> Traceback (most recent call last):
>   File "shell/impala_shell.py", line 1428, in _execute_stmt
> if is_dml and num_rows == 0 and num_deleted_rows > 0:
> TypeError: '>' not supported between instances of 'NoneType' and 'int'
> {code}
> The same error should also happen when the delete removes > 0 rows but the 
> Impala server is an older version that doesn't set TDmlResult.rows_deleted.






[jira] [Commented] (IMPALA-13008) test_metadata_tables failed in Ubuntu 20 build

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839689#comment-17839689
 ] 

ASF subversion and git services commented on IMPALA-13008:
--

Commit ce863e8c7ffd3d5abdc2c69b06042b711a0f707f in impala's branch 
refs/heads/branch-4.4.0 from Daniel Becker
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=ce863e8c7 ]

IMPALA-13008: test_metadata_tables failed in Ubuntu 20 build

TestIcebergV2Table::test_metadata_tables failed in Ubuntu 20 build in a
release candidate because the file sizes in some queries didn't match
the expected ones. Because Impala embeds its version into the Parquet
files it writes, the file sizes can change with the release (especially
as SNAPSHOT or RELEASE is part of the full version, and their lengths
differ).

This change updates the failing tests to take regexes for the file sizes
instead of concrete values.
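
As an illustration, a pattern like the ones visible in the Jira excerpt
below accepts any positive byte count where the old test expected a
literal size:

{code:python}
import re

# The updated tests match sizes with a regex instead of a hard-coded value.
pattern = r'"added-files-size":"[1-9][0-9]*"'
assert re.search(pattern, '"added-files-size":"351"')  # SNAPSHOT build
assert re.search(pattern, '"added-files-size":"350"')  # RELEASE build
{code}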

Change-Id: Iad8fd0d9920034e7dbe6c605bed7579fbe3b5b1f
Reviewed-on: http://gerrit.cloudera.org:8080/21317
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> test_metadata_tables failed in Ubuntu 20 build
> --
>
> Key: IMPALA-13008
> URL: https://issues.apache.org/jira/browse/IMPALA-13008
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Zoltán Borók-Nagy
>Assignee: Daniel Becker
>Priority: Major
>  Labels: impala-iceberg
>
> test_metadata_tables failed in an Ubuntu 20 release test build:
> * 
> https://jenkins.impala.io/job/parallel-all-tests-ub2004/1059/artifact/https_%5E%5Ejenkins.impala.io%5Ejob%5Eubuntu-20.04-dockerised-tests%5E1642%5E.log
> * 
> https://jenkins.impala.io/job/parallel-all-tests-ub2004/1059/artifact/https_%5E%5Ejenkins.impala.io%5Ejob%5Eubuntu-20.04-from-scratch%5E2363%5E.log
> h2. Error
> {noformat}
> E   assert Comparing QueryTestResults (expected vs actual):
> E 
> 'append',true,'{"added-data-files":"1","added-records":"1","added-files-size":"351","changed-partition-count":"1","total-records":"1","total-files-size":"351","total-data-files":"1","total-delete-files":"0","total-position-deletes":"0","total-equality-deletes":"0"}'
>  != 
> 'append',true,'{"added-data-files":"1","added-records":"1","added-files-size":"350","changed-partition-count":"1","total-records":"1","total-files-size":"350","total-data-files":"1","total-delete-files":"0","total-position-deletes":"0","total-equality-deletes":"0"}'
> E 
> 'append',true,'{"added-data-files":"1","added-records":"1","added-files-size":"351","changed-partition-count":"1","total-records":"2","total-files-size":"702","total-data-files":"2","total-delete-files":"0","total-position-deletes":"0","total-equality-deletes":"0"}'
>  != 
> 'append',true,'{"added-data-files":"1","added-records":"1","added-files-size":"350","changed-partition-count":"1","total-records":"2","total-files-size":"700","total-data-files":"2","total-delete-files":"0","total-position-deletes":"0","total-equality-deletes":"0"}'
> E 
> 'append',true,'{"added-data-files":"1","added-records":"1","added-files-size":"351","changed-partition-count":"1","total-records":"3","total-files-size":"1053","total-data-files":"3","total-delete-files":"0","total-position-deletes":"0","total-equality-deletes":"0"}'
>  != 
> 'append',true,'{"added-data-files":"1","added-records":"1","added-files-size":"350","changed-partition-count":"1","total-records":"3","total-files-size":"1050","total-data-files":"3","total-delete-files":"0","total-position-deletes":"0","total-equality-deletes":"0"}'
> E 
> row_regex:'overwrite',true,'{"added-position-delete-files":"1","added-delete-files":"1","added-files-size":"[1-9][0-9]*","added-position-deletes":"1","changed-partition-count":"1","total-records":"3","total-files-size":"[1-9][0-9]*","total-data-files":"3","total-delete-files":"1","total-position-deletes":"1","total-equality-deletes":"0"}'
>  == 
> 'overwrite',true,'{"added-position-delete-files":"1","added-delete-files":"1","added-files-size":"1551","added-position-deletes":"1","changed-partition-count":"1","total-records":"3","total-files-size":"2601","total-data-files":"3","total-delete-files":"1","total-position-deletes":"1","total-equality-deletes":"0"}'
> {noformat}






[jira] [Commented] (IMPALA-12461) Avoid write lock on the table during self-event detection

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839701#comment-17839701
 ] 

ASF subversion and git services commented on IMPALA-12461:
--

Commit e1bbdacc5133d36d997e0e19c52753df90376a1e in impala's branch 
refs/heads/branch-4.4.0 from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=e1bbdacc5 ]

IMPALA-12543: Detect self-events before finishing DDL

test_iceberg_self_events has been flaky for not having
tbls_refreshed_before equal to tbls_refreshed_after in-between query
executions.

Further investigation reveals a concurrency bug: the db/table level
lock is not taken during the db/table self-events check (IMPALA-12461
part 1). The order of an ALTER TABLE operation is as follows:

1. alter table starts in CatalogOpExecutor
2. table level lock is taken
3. HMS RPC starts (CatalogOpExecutor.applyAlterTable())
4. HMS generates the event
5. HMS RPC returns
6. table is reloaded
7. catalog version is added to inflight event list
8. table level lock is released

Meanwhile the event processor thread fetches the new event after 4 and
before 7. Because of IMPALA-12461 (part 1), it can also finish
self-events checking before reaching 7. Before IMPALA-12461, self-events
would have needed to wait for 8. Note that this issue is only relevant
for table level events, as self-events checking for partition level
events still takes table lock.

This patch fixes the issue by adding newCatalogVersion to the table's
inflight event list before updating HMS, using the helper class
InProgressTableModification. If the HMS update does not complete (i.e.,
an exception is thrown), the newCatalogVersion that was added is then
removed.

This patch also fixes a few smaller issues, including:
- Avoid incrementing EVENTS_SKIPPED_METRIC if numFilteredEvents == 0 in
  MetastoreEventFactory.getFilteredEvents().
- Increment EVENTS_SKIPPED_METRIC in
  MetastoreTableEvent.reloadTableFromCatalog() if table is already in
  the middle of reloading (revealed through flaky
  test_skipping_older_events).
- Rephrase misleading log message in
  MetastoreEventProcessor.getNextMetastoreEvents().

Testing:
- Add TestEventProcessingWithImpala, run it with debug_action
  and sync_ddl dimensions.
- Pass exhaustive tests.

Change-Id: I8365c934349ad21a4d9327fc11594d2fc3445f79
Reviewed-on: http://gerrit.cloudera.org:8080/21029
Reviewed-by: Riza Suminto 
Tested-by: Impala Public Jenkins 


> Avoid write lock on the table during self-event detection
> -
>
> Key: IMPALA-12461
> URL: https://issues.apache.org/jira/browse/IMPALA-12461
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Csaba Ringhofer
>Assignee: Csaba Ringhofer
>Priority: Critical
>
> Saw some callstacks like this:
> {code}
>     at 
> org.apache.impala.catalog.CatalogServiceCatalog.tryLock(CatalogServiceCatalog.java:468)
>     at 
> org.apache.impala.catalog.CatalogServiceCatalog.tryWriteLock(CatalogServiceCatalog.java:436)
>     at 
> org.apache.impala.catalog.CatalogServiceCatalog.evaluateSelfEvent(CatalogServiceCatalog.java:1008)
>     at 
> org.apache.impala.catalog.events.MetastoreEvents$MetastoreEvent.isSelfEvent(MetastoreEvents.java:609)
>     at 
> org.apache.impala.catalog.events.MetastoreEvents$BatchPartitionEvent.process(MetastoreEvents.java:1942)
> {code}
> At this point it was already checked that the event comes from Impala based 
> on service id, and now we are checking the table's self-event list. Taking 
> the table lock can be problematic as other DDLs may take the write lock at 
> the same time.






[jira] [Commented] (IMPALA-12988) Calculate an unbounded version of CpuAsk

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839697#comment-17839697
 ] 

ASF subversion and git services commented on IMPALA-12988:
--

Commit c8149d14127968eb9a4d26a623fa6cc82762216e in impala's branch 
refs/heads/branch-4.4.0 from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=c8149d141 ]

IMPALA-12988: Calculate an unbounded version of CpuAsk

Planner calculates CpuAsk through a recursive call beginning at
Planner.computeBlockingAwareCores(), which is called after
Planner.computeEffectiveParallelism(). It does blocking operator
analysis over the selected degree of parallelism that was decided during
computeEffectiveParallelism() traversal. That selected degree of
parallelism, however, is already bounded by min and max parallelism
config, derived from the PROCESSING_COST_MIN_THREADS and
MAX_FRAGMENT_INSTANCES_PER_NODE options respectively.

This patch calculates an unbounded version of CpuAsk that is not bounded
by min and max parallelism config. It is purely based on the fragment's
ProcessingCost and query plan relationship constraint (for example, the
number of JOIN BUILDER fragments should equal the number of destination
JOIN fragments for partitioned join).

Frontend will receive both bounded and unbounded CpuAsk values from
TQueryExecRequest on each executor group set selection round. The
unbounded CpuAsk is then scaled down once using an nth-root-based
sublinear function, controlled by the total cpu count of the smallest
executor group set and the bounded CpuAsk number. Another linear scaling
is then applied to both the bounded and unbounded CpuAsk using the
QUERY_CPU_COUNT_DIVISOR option. The Frontend then compares the unbounded
CpuAsk after scaling against CpuMax to avoid assigning a query to a
small executor group set too soon. The last executor group set stays as
the "catch-all" executor group set.
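
The exact scaling formula is not spelled out in this message; the sketch
below only illustrates the general shape of an nth-root-based sublinear
scale-down followed by the linear divisor (the function name, the choice
of root, and the cutoff are all assumptions for illustration, not
Impala's actual math):

{code:python}
# Hypothetical illustration only; not Impala's actual formula.
def scale_cpu_ask(unbounded_ask, smallest_group_cpu_count,
                  query_cpu_count_divisor=1.0, root=2.0):
    # Sublinear (nth-root) damping of the part that exceeds the smallest
    # executor group set's total CPU count.
    if unbounded_ask > smallest_group_cpu_count:
        excess = unbounded_ask - smallest_group_cpu_count
        unbounded_ask = smallest_group_cpu_count + excess ** (1.0 / root)
    # Linear scaling via QUERY_CPU_COUNT_DIVISOR applies afterwards.
    return unbounded_ask / query_cpu_count_divisor
{code}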

After this patch, setting COMPUTE_PROCESSING_COST=True will show
following changes in query profile:
- The "max-parallelism" fields in the query plan will all be set to
  maximum parallelism based on ProcessingCost.
- The CpuAsk counter is changed to show the unbounded CpuAsk after
  scaling.
- A new counter CpuAskBounded shows the bounded CpuAsk after scaling. If
  QUERY_CPU_COUNT_DIVISOR=1 and PLANNER_CPU_ASK slot counting strategy
  is selected, this CpuAskBounded is also the minimum total admission
  slots given to the query.
- A new counter MaxParallelism shows the unbounded CpuAsk before
  scaling.
- The EffectiveParallelism counter remains unchanged,
  showing bounded CpuAsk before scaling.

Testing:
- Update and pass FE test TpcdsCpuCostPlannerTest and
  PlannerTest#testProcessingCost.
- Pass EE test tests/query_test/test_tpcds_queries.py
- Pass custom cluster test tests/custom_cluster/test_executor_groups.py

Change-Id: I5441e31088f90761062af35862be4ce09d116923
Reviewed-on: http://gerrit.cloudera.org:8080/21277
Reviewed-by: Kurt Deschler 
Reviewed-by: Abhishek Rawat 
Tested-by: Impala Public Jenkins 


> Calculate an unbounded version of CpuAsk
> 
>
> Key: IMPALA-12988
> URL: https://issues.apache.org/jira/browse/IMPALA-12988
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Reporter: Riza Suminto
>Assignee: Riza Suminto
>Priority: Major
> Fix For: Impala 4.4.0
>
>
> CpuAsk is calculated through a recursive call beginning at 
> Planner.computeBlockingAwareCores(), which is called after 
> Planner.computeEffectiveParallelism(). It does blocking operator analysis 
> over the selected degree of parallelism that was decided during 
> computeEffectiveParallelism() traversal. That selected degree of parallelism, 
> however, is already bounded by min and max parallelism config, derived from 
> the PROCESSING_COST_MIN_THREADS and MAX_FRAGMENT_INSTANCES_PER_NODE options 
> respectively.
> It is beneficial to have another version of CpuAsk that is not bounded by min 
> and max parallelism config. It should be purely based on the fragment's 
> ProcessingCost and query plan relationship constraints (i.e., the number of 
> JOIN BUILDER fragments should equal the number of JOIN fragments for a 
> partitioned join). During executor group set selection, the Frontend should 
> use the unbounded CpuAsk number to avoid assigning a query to a small 
> executor group set prematurely.






[jira] [Commented] (IMPALA-8998) Admission control accounting for mt_dop

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839694#comment-17839694
 ] 

ASF subversion and git services commented on IMPALA-8998:
-

Commit f97042384e0312cfd3943426e048581d9678891d in impala's branch 
refs/heads/branch-4.4.0 from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=f97042384 ]

IMPALA-12980: Translate CpuAsk into admission control slots

Impala has a concept of "admission control slots" - the amount of
parallelism that should be allowed on an Impala daemon. This defaults to
the number of processors per executor and can be overridden with the
--admission_control_slots flag.

Admission control slot accounting is described in IMPALA-8998. It
computes 'slots_to_use' for each backend based on the maximum number of
instances of any fragment on that backend. This can lead to slot
underestimation and query overadmission. For example, assume an executor
node with 48 CPU cores and configured with --admission_control_slots=48.
It is assigned 4 non-blocking query fragments, each with 12 instances
scheduled on this executor. The IMPALA-8998 algorithm will request the
max instance count (12) in slots rather than the sum of all non-blocking
fragment instances (48). With the 36 remaining slots free, the executor
can still admit another fragment from a different query but will
potentially have CPU contention with the one that is currently running.

When COMPUTE_PROCESSING_COST is enabled, Planner will generate a CpuAsk
number that represents the cpu requirement of that query over a
particular executor group set. This number is an estimation of the
largest number of query fragment instances that can run in parallel
without waiting, given by the blocking operator analysis. Therefore, the
fragment trace that sums into that CpuAsk number can be translated into
'slots_to_use' as well, which more closely resembles the maximum
parallel execution of fragment instances.

This patch adds a new query option called SLOT_COUNT_STRATEGY to control
which admission control slot accounting to use. There are two possible
values:
- LARGEST_FRAGMENT, which is the original algorithm from IMPALA-8998.
  This is still the default value for the SLOT_COUNT_STRATEGY option.
- PLANNER_CPU_ASK, which will follow the fragment trace that contributes
  towards CpuAsk number. This strategy will schedule more or equal
  admission control slots than the LARGEST_FRAGMENT strategy.
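
For example, the option can be set per session like any other Impala
query option (shown here through the impyla client; the host and port
are assumptions, and any client able to issue SET statements works):

{code:python}
from impala.dbapi import connect

conn = connect(host="localhost", port=21050)  # assumes a local impalad
cur = conn.cursor()
cur.execute("SET COMPUTE_PROCESSING_COST=true")
cur.execute("SET SLOT_COUNT_STRATEGY=PLANNER_CPU_ASK")
cur.execute("SELECT count(*) FROM tpch.lineitem")
print(cur.fetchall())
{code}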

To implement the PLANNER_CPU_ASK strategy, the Planner will mark fragments that
contribute to CpuAsk as dominant fragments. It also passes
max_slot_per_executor information that it knows about the executor group
set to the scheduler.

AvgAdmissionSlotsPerExecutor counter is added to describe what Planner
thinks the average 'slots_to_use' per backend will be, which follows
this formula:

  AvgAdmissionSlotsPerExecutor = ceil(CpuAsk / num_executors)

Actual 'slots_to_use' on each backend may differ from
AvgAdmissionSlotsPerExecutor, depending on what is scheduled on that
backend. 'slots_to_use' will be shown as 'AdmissionSlots' counter under
each executor profile node.

Testing:
- Update test_executors.py with AvgAdmissionSlotsPerExecutor assertion.
- Pass test_tpcds_queries.py::TestTpcdsQueryWithProcessingCost.
- Add EE test test_processing_cost.py.
- Add FE test PlannerTest#testProcessingCostPlanAdmissionSlots.

Change-Id: I338ca96555bfe8d07afce0320b3688a0861663f2
Reviewed-on: http://gerrit.cloudera.org:8080/21257
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Admission control accounting for mt_dop
> ---
>
> Key: IMPALA-8998
> URL: https://issues.apache.org/jira/browse/IMPALA-8998
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Tim Armstrong
>Assignee: Tim Armstrong
>Priority: Major
> Fix For: Impala 3.4.0
>
>
> We should account for the degree of parallelism that the query runs with on a 
> backend to avoid overadmitting too many parallel queries. 
> We could probably simply count the effective degree of parallelism (max # 
> instances of a fragment on that backend) toward the number of slots in 
> admission control (although slots are not enabled for the default group yet - 
> see IMPALA-8757).






[jira] [Commented] (IMPALA-11495) Add glibc version and effective locale to the Web UI

2024-04-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839684#comment-17839684
 ] 

ASF subversion and git services commented on IMPALA-11495:
--

Commit 43690811192bce9f921244efa5934851ad1e4529 in impala's branch 
refs/heads/branch-4.4.0 from Saurabh Katiyal
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=436908111 ]

IMPALA-11495: Add glibc version and effective locale to the Web UI

Added a new section, "Other Info", to the Web UI root page, displaying
the effective locale and glibc version.
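
For reference, one programmatic way to read both values on Linux (a
Python sketch for brevity; the daemon itself gathers these in its C++
startup code, so this is illustrative, not Impala's implementation):

{code:python}
import locale
import os

# On glibc-based Linux, confstr exposes the library version, e.g. 'glibc 2.31'.
print(os.confstr("CS_GNU_LIBC_VERSION"))

# The effective locale after honoring the environment (LANG, LC_*).
print(locale.setlocale(locale.LC_ALL, ""))
{code}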

Change-Id: Ia69c4d63df4beae29f5261691a8dcdd04b931de7
Reviewed-on: http://gerrit.cloudera.org:8080/21252
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Add glibc version and effective locale to the Web UI
> 
>
> Key: IMPALA-11495
> URL: https://issues.apache.org/jira/browse/IMPALA-11495
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend
>Reporter: Quanlong Huang
>Assignee: Saurabh Katiyal
>Priority: Major
>  Labels: newbie, observability, supportability
> Fix For: Impala 4.4.0
>
>
> When debugging utf8 mode string functions, it's essential to know the 
> effective Unicode version and locale. The Unicode standard version can be 
> deduced from the glibc version, which can be obtained with the command 
> "ldd --version". We need to find a programmatic way to get it.
> The effective locale is already logged here:
> https://github.com/apache/impala/blob/ba4cb95b6251911fa9e057cea1cb37958d339fed/be/src/common/init.cc#L406
> We just need to show it in impalad's Web UI as well.
>  






[jira] [Commented] (IMPALA-12461) Avoid write lock on the table during self-event detection

2024-04-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839455#comment-17839455
 ] 

ASF subversion and git services commented on IMPALA-12461:
--

Commit 93278cccf0abca59be29ef2963e79294ca9e0885 in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=93278cccf ]

IMPALA-12543: Detect self-events before finishing DDL

test_iceberg_self_events has been flaky for not having
tbls_refreshed_before equal to tbls_refreshed_after in-between query
executions.

Further investigation reveals a concurrency bug: the db/table level
lock is not taken during the db/table self-events check (IMPALA-12461
part 1). The order of an ALTER TABLE operation is as follows:

1. alter table starts in CatalogOpExecutor
2. table level lock is taken
3. HMS RPC starts (CatalogOpExecutor.applyAlterTable())
4. HMS generates the event
5. HMS RPC returns
6. table is reloaded
7. catalog version is added to inflight event list
8. table level lock is released

Meanwhile the event processor thread fetches the new event after 4 and
before 7. Because of IMPALA-12461 (part 1), it can also finish
self-events checking before reaching 7. Before IMPALA-12461, self-events
would have needed to wait for 8. Note that this issue is only relevant
for table level events, as self-events checking for partition level
events still takes table lock.

This patch fixes the issue by adding newCatalogVersion to the table's
inflight event list before updating HMS, using the helper class
InProgressTableModification. If the HMS update does not complete (i.e.,
an exception is thrown), the newCatalogVersion that was added is then
removed.

This patch also fixes a few smaller issues, including:
- Avoid incrementing EVENTS_SKIPPED_METRIC if numFilteredEvents == 0 in
  MetastoreEventFactory.getFilteredEvents().
- Increment EVENTS_SKIPPED_METRIC in
  MetastoreTableEvent.reloadTableFromCatalog() if table is already in
  the middle of reloading (revealed through flaky
  test_skipping_older_events).
- Rephrase misleading log message in
  MetastoreEventProcessor.getNextMetastoreEvents().

Testing:
- Add TestEventProcessingWithImpala, run it with debug_action
  and sync_ddl dimensions.
- Pass exhaustive tests.

Change-Id: I8365c934349ad21a4d9327fc11594d2fc3445f79
Reviewed-on: http://gerrit.cloudera.org:8080/21029
Reviewed-by: Riza Suminto 
Tested-by: Impala Public Jenkins 


> Avoid write lock on the table during self-event detection
> -
>
> Key: IMPALA-12461
> URL: https://issues.apache.org/jira/browse/IMPALA-12461
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Csaba Ringhofer
>Assignee: Csaba Ringhofer
>Priority: Critical
>
> Saw some callstacks like this:
> {code}
>     at 
> org.apache.impala.catalog.CatalogServiceCatalog.tryLock(CatalogServiceCatalog.java:468)
>     at 
> org.apache.impala.catalog.CatalogServiceCatalog.tryWriteLock(CatalogServiceCatalog.java:436)
>     at 
> org.apache.impala.catalog.CatalogServiceCatalog.evaluateSelfEvent(CatalogServiceCatalog.java:1008)
>     at 
> org.apache.impala.catalog.events.MetastoreEvents$MetastoreEvent.isSelfEvent(MetastoreEvents.java:609)
>     at 
> org.apache.impala.catalog.events.MetastoreEvents$BatchPartitionEvent.process(MetastoreEvents.java:1942)
> {code}
> At this point it was already checked that the event comes from Impala based 
> on service id, and now we are checking the table's self-event list. Taking 
> the table lock can be problematic as other DDLs may take the write lock at 
> the same time.






[jira] [Commented] (IMPALA-12543) test_iceberg_self_events failed in JDK11 build

2024-04-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839454#comment-17839454
 ] 

ASF subversion and git services commented on IMPALA-12543:
--

Commit 93278cccf0abca59be29ef2963e79294ca9e0885 in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=93278cccf ]

IMPALA-12543: Detect self-events before finishing DDL

test_iceberg_self_events has been flaky for not having
tbls_refreshed_before equal to tbls_refreshed_after in-between query
executions.

Further investigation reveals a concurrency bug: the db/table level
lock is not taken during the db/table self-events check (IMPALA-12461
part 1). The order of an ALTER TABLE operation is as follows:

1. alter table starts in CatalogOpExecutor
2. table level lock is taken
3. HMS RPC starts (CatalogOpExecutor.applyAlterTable())
4. HMS generates the event
5. HMS RPC returns
6. table is reloaded
7. catalog version is added to inflight event list
8. table level lock is released

Meanwhile the event processor thread fetches the new event after 4 and
before 7. Because of IMPALA-12461 (part 1), it can also finish
self-events checking before reaching 7. Before IMPALA-12461, self-events
would have needed to wait for 8. Note that this issue is only relevant
for table level events, as self-events checking for partition level
events still takes table lock.

This patch fixes the issue by adding newCatalogVersion to the table's
inflight event list before updating HMS, using the helper class
InProgressTableModification. If the HMS update does not complete (i.e.,
an exception is thrown), the newCatalogVersion that was added is then
removed.

This patch also fixes a few smaller issues, including:
- Avoid incrementing EVENTS_SKIPPED_METRIC if numFilteredEvents == 0 in
  MetastoreEventFactory.getFilteredEvents().
- Increment EVENTS_SKIPPED_METRIC in
  MetastoreTableEvent.reloadTableFromCatalog() if table is already in
  the middle of reloading (revealed through flaky
  test_skipping_older_events).
- Rephrase misleading log message in
  MetastoreEventProcessor.getNextMetastoreEvents().

Testing:
- Add TestEventProcessingWithImpala, run it with debug_action
  and sync_ddl dimensions.
- Pass exhaustive tests.

Change-Id: I8365c934349ad21a4d9327fc11594d2fc3445f79
Reviewed-on: http://gerrit.cloudera.org:8080/21029
Reviewed-by: Riza Suminto 
Tested-by: Impala Public Jenkins 


> test_iceberg_self_events failed in JDK11 build
> --
>
> Key: IMPALA-12543
> URL: https://issues.apache.org/jira/browse/IMPALA-12543
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Riza Suminto
>Assignee: Riza Suminto
>Priority: Major
>  Labels: broken-build
> Attachments: catalogd.INFO, std_err.txt
>
>
> test_iceberg_self_events failed in JDK11 build with the following error.
>  
> {code:java}
> Error Message
> assert 0 == 1
> Stacktrace
> custom_cluster/test_events_custom_configs.py:637: in test_iceberg_self_events
>     check_self_events("ALTER TABLE {0} ADD COLUMN j INT".format(tbl_name))
> custom_cluster/test_events_custom_configs.py:624: in check_self_events
>     assert tbls_refreshed_before == tbls_refreshed_after
> E   assert 0 == 1 {code}
> This test still passed before IMPALA-11387 was merged.
>  






[jira] [Commented] (IMPALA-12933) Catalogd should set eventTypeSkipList when fetching specific events for a table

2024-04-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839297#comment-17839297
 ] 

ASF subversion and git services commented on IMPALA-12933:
--

Commit db09d58ef767b2b759792412efcc9481777c464b in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=db09d58ef ]

IMPALA-12933: Avoid fetching unnecessary events of unwanted types

There are several places where catalogd will fetch all events of a
specific type on a table. E.g. in TableLoader#load(), if the table has
an old createEventId, catalogd will fetch all CREATE_TABLE events after
that createEventId on the table.

Fetching the list of events is expensive since the filtering is done on
the client side, i.e. catalogd fetches all events and filters them
locally based on the event type and table name. This could take hours if
there are lots of events (e.g. 1M) in HMS.

This patch sets the eventTypeSkipList to the complement set of the
wanted type, so the get_next_notification RPC can filter out some events
on the HMS side. To avoid bringing too much computation overhead to HMS's
underlying RDBMS in evaluating EVENT_TYPE != 'xxx' predicates, rare
event types (e.g. DROP_ISCHEMA) are not added to the list. A new flag,
common_hms_event_types, is added to specify the common HMS event types.
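
A condensed sketch of the complement-set idea (the event-type names and
lists below are illustrative stand-ins; the real lists come from the new
flags):

{code:python}
# Hypothetical stand-ins for --common_hms_event_types and the wanted type.
common_types = {"CREATE_TABLE", "DROP_TABLE", "ALTER_TABLE", "ADD_PARTITION",
                "INSERT", "RELOAD", "UPDATE_TBL_COL_STAT_EVENT"}
wanted_types = {"CREATE_TABLE"}

# Skip every common type except the wanted one; rare types are omitted so
# HMS's RDBMS doesn't evaluate a long chain of EVENT_TYPE != 'xxx' predicates.
event_type_skip_list = sorted(common_types - wanted_types)
print(event_type_skip_list)
{code}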

Once HIVE-28146 is resolved, we can set the wanted types directly in the
HMS RPC and this approach can be simplified.

UPDATE_TBL_COL_STAT_EVENT, UPDATE_PART_COL_STAT_EVENT are the most
common unused events for Impala. They are also added to the default skip
list. A new flag, default_skipped_hms_event_types, is added to configure
this list.

This patch also fixes an issue where events of the non-default catalog
were not filtered out.

In a local perf test, I generated 100K RELOAD events after creating a
table in Hive. Then use the table in Impala to trigger metadata loading
on it which will fetch the latest CREATE_TABLE event by polling all
events after the last known CREATE_TABLE event. Before this patch,
fetching the events took 1s779ms. Now it takes only 395.377ms. Note
that in a prod env the event messages are usually larger, so we could
see a larger speedup.

Tests:
 - Added an FE test
 - Ran CORE tests

Change-Id: Ieabe714328aa2cc605cb62b85ae8aa4bd537dbe9
Reviewed-on: http://gerrit.cloudera.org:8080/21186
Reviewed-by: Csaba Ringhofer 
Tested-by: Impala Public Jenkins 


> Catalogd should set eventTypeSkipList when fetching specific events for a 
> table
> ---
>
> Key: IMPALA-12933
> URL: https://issues.apache.org/jira/browse/IMPALA-12933
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Quanlong Huang
>Assignee: Quanlong Huang
>Priority: Critical
>
> There are several places where catalogd will fetch all events of a specific 
> type on a table. E.g. in TableLoader#load(), if the table has an old 
> createEventId, catalogd will fetch all CREATE_TABLE events after that 
> createEventId on the table.
> Fetching the list of events is expensive since the filtering is done on the 
> client side, i.e. catalogd fetches all events and filters them locally based 
> on the event type and table name:
> [https://github.com/apache/impala/blob/14e3ed4f97292499b2e6ee8d5a756dc648d9/fe/src/main/java/org/apache/impala/catalog/TableLoader.java#L98-L102]
> [https://github.com/apache/impala/blob/b7ddbcad0dd6accb559a3f391a897a8c442d1728/fe/src/main/java/org/apache/impala/catalog/events/MetastoreEventsProcessor.java#L336]
> This could take hours if there are lots of events (e.g. 1M) in HMS. In fact, 
> NotificationEventRequest can specify an eventTypeSkipList, so catalogd can do 
> the event-type filtering on the HMS side. On higher Hive versions that have 
> HIVE-27499, catalogd can also specify the table name in the request 
> (IMPALA-12607).
> This Jira focuses on specifying the eventTypeSkipList when fetching events of 
> a particular type on a table.






[jira] [Commented] (IMPALA-12657) Improve ProcessingCost of ScanNode and NonGroupingAggregator

2024-04-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839296#comment-17839296
 ] 

ASF subversion and git services commented on IMPALA-12657:
--

Commit 25a8d70664ce4c91634e1377e020a4d7fa8b09c0 in impala's branch 
refs/heads/master from David Rorke
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=25a8d7066 ]

IMPALA-12657: Improve ProcessingCost of ScanNode and NonGroupingAggregator

This patch improves the accuracy of the CPU ProcessingCost estimates for
several of the CPU intensive operators by basing the costs on benchmark
data. The general approach for a given operator was to run a set of queries
that exercised the operator under various conditions (e.g. large vs small
row sizes and row counts, varying NDV, different file formats, etc) and
capture the CPU time spent per unit of work (the unit of work might be
measured as some number of rows, some number of bytes, some number of
predicates evaluated, or some combination of these). The data was then
analyzed in an attempt to fit a simple model that would allow us to
predict CPU consumption of a given operator based on information available
at planning time.

For example, the CPU ProcessingCost for a Parquet scan is estimated as:
TotalCost = (0.0144 * BytesMaterialized) + (0.0281 * Rows * Predicate Count)

The coefficients (0.0144 and 0.0281) are derived from benchmarking
scans under a variety of conditions. Similar cost functions and coefficients
were derived for all of the benchmarked operators. The coefficients for all
the operators are normalized such that a single unit of cost equates to
roughly 100 nanoseconds of CPU time on a r5d.4xlarge instance. So we would
predict an operator with a cost of 10,000,000 would complete in roughly one
second on a single core.

Limitations:
* Costing only addresses CPU time spent and doesn't account for any IO
  or other wait time.
* Benchmarking scenarios didn't provide comprehensive coverage of the
  full range of data types, distributions, etc. More thorough
  benchmarking could improve the costing estimates further.
* This initial patch only covers a subset of the operators, focusing
  on those that are most common and most CPU intensive. Specifically
  the following operators are covered by this patch. All others
  continue to use the previous ProcessingCost code:
  AggregationNode
  DataStreamSink (exchange sender)
  ExchangeNode
  HashJoinNode
  HdfsScanNode
  HdfsTableSink
  NestedLoopJoinNode
  SortNode
  UnionNode

Benchmark-based costing of the remaining operators will be covered by
a future patch.

Future patches will automate the collection and analysis of the benchmark
data and the computation of the cost coefficients to simplify maintenance
of the costing as performance changes over time.

Change-Id: Icf1edd48d4ae255b7b3b7f5b228800d7bac7d2ca
Reviewed-on: http://gerrit.cloudera.org:8080/21279
Reviewed-by: Riza Suminto 
Tested-by: Impala Public Jenkins 


> Improve ProcessingCost of ScanNode and NonGroupingAggregator
> 
>
> Key: IMPALA-12657
> URL: https://issues.apache.org/jira/browse/IMPALA-12657
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 4.3.0
>Reporter: Riza Suminto
>Assignee: David Rorke
>Priority: Major
> Fix For: Impala 4.4.0
>
> Attachments: profile_1f4d7a679a3e12d5_42231157.txt
>
>
> Several benchmark runs measuring Impala scan performance indicate some 
> costing improvement opportunities around ScanNode and NonGroupingAggregator.
> [^profile_1f4d7a679a3e12d5_42231157.txt] shows an example of a simple 
> count query.
> Key takeaways:
>  # There is a strong correlation between total materialized bytes (row size * 
> cardinality) and the total tuple materialization time per fragment. Row 
> materialization cost should be adjusted to be based on this row size instead 
> of an equal cost per scan range.
>  # NonGroupingAggregator should have a much lower cost than GroupingAggregator. 
> In the example above, the cost of NonGroupingAggregator dominates the scan 
> fragment even though it only does simple counting instead of hash table 
> operations.






[jira] [Commented] (IMPALA-13016) Fix ambiguous row_regex that check for no-existence

2024-04-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839194#comment-17839194
 ] 

ASF subversion and git services commented on IMPALA-13016:
--

Commit 9a41dfbdc7b59728a16c28c9bfed483a6ee9d3ae in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=9a41dfbdc ]

IMPALA-13016: Fix ambiguous row_regex that check for no-existence

There are a few row_regex patterns used in EE test files that are
ambiguous on whether a pattern does not exist in all parts of the
results/runtime profile or at least one row does not have that pattern.
These were caught by grepping the following pattern:

$ git grep -n "row_regex: (?\!"

This patch replaces them with either !row_regex or a VERIFY_IS_NOT_IN
comment.
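
A small Python demonstration of the ambiguity (applying the pattern per
row, roughly as the test framework does):

{code:python}
import re

profile_rows = ["F00:SCAN HDFS", "F03:JOIN BUILD", "F05:EXCHANGE"]

# The per-row negative lookahead is satisfied by *any* row lacking the
# pattern, so this 'absence' check passes...
assert any(re.match(r"(?!.*F03:JOIN BUILD)", row) for row in profile_rows)

# ...even though the pattern is clearly present in the profile.
assert any("F03:JOIN BUILD" in row for row in profile_rows)
{code}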

Testing:
- Run and pass modified tests.

Change-Id: Ic81de34bf997dfaf1c199b1fe1b05346b55ff4da
Reviewed-on: http://gerrit.cloudera.org:8080/21333
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Fix ambiguous row_regex that check for no-existence
> ---
>
> Key: IMPALA-13016
> URL: https://issues.apache.org/jira/browse/IMPALA-13016
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Reporter: Riza Suminto
>Assignee: Riza Suminto
>Priority: Minor
>
> There are a few row_regex patterns used in EE test files that are ambiguous 
> on whether a pattern does not exist in all parts of the results/runtime 
> profile or at least one row does not have that pattern:
> {code:java}
> $ git grep -n "row_regex: (?\!"
> testdata/workloads/functional-query/queries/QueryTest/acid-clear-statsaccurate.test:34:row_regex:
>  (?!.*COLUMN_STATS_ACCURATE)
> testdata/workloads/functional-query/queries/QueryTest/acid-truncate.test:47:row_regex:
>  (?!.*COLUMN_STATS_ACCURATE)
> testdata/workloads/functional-query/queries/QueryTest/clear-statsaccurate.test:28:row_regex:
>  (?!.*COLUMN_STATS_ACCURATE)
> testdata/workloads/functional-query/queries/QueryTest/iceberg-v2-directed-mode.test:14:row_regex:
>  (?!.*F03:JOIN BUILD.*) {code}
> They should be replaced with either !row_regex or a VERIFY_IS_NOT_IN comment.






[jira] [Commented] (IMPALA-12988) Calculate an unbounded version of CpuAsk

2024-04-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839196#comment-17839196
 ] 

ASF subversion and git services commented on IMPALA-12988:
--

Commit d437334e5304823836b9ceb5ffda9945dd7cb183 in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=d437334e5 ]

IMPALA-12988: Calculate an unbounded version of CpuAsk

Planner calculates CpuAsk through a recursive call beginning at
Planner.computeBlockingAwareCores(), which is called after
Planner.computeEffectiveParallelism(). It does blocking operator
analysis over the selected degree of parallelism that was decided during
computeEffectiveParallelism() traversal. That selected degree of
parallelism, however, is already bounded by min and max parallelism
config, derived from the PROCESSING_COST_MIN_THREADS and
MAX_FRAGMENT_INSTANCES_PER_NODE options respectively.

This patch calculates an unbounded version of CpuAsk that is not bounded
by min and max parallelism config. It is purely based on the fragment's
ProcessingCost and query plan relationship constraint (for example, the
number of JOIN BUILDER fragments should equal the number of destination
JOIN fragments for partitioned join).

Frontend will receive both bounded and unbounded CpuAsk values from
TQueryExecRequest on each executor group set selection round. The
unbounded CpuAsk is then scaled down once using an nth-root-based
sublinear function, controlled by the total cpu count of the smallest
executor group set and the bounded CpuAsk number. Another linear scaling
is then applied to both the bounded and unbounded CpuAsk using the
QUERY_CPU_COUNT_DIVISOR option. The Frontend then compares the unbounded
CpuAsk after scaling against CpuMax to avoid assigning a query to a
small executor group set too soon. The last executor group set stays as
the "catch-all" executor group set.

After this patch, setting COMPUTE_PROCESSING_COST=True will show
following changes in query profile:
- The "max-parallelism" fields in the query plan will all be set to
  maximum parallelism based on ProcessingCost.
- The CpuAsk counter is changed to show the unbounded CpuAsk after
  scaling.
- A new counter CpuAskBounded shows the bounded CpuAsk after scaling. If
  QUERY_CPU_COUNT_DIVISOR=1 and PLANNER_CPU_ASK slot counting strategy
  is selected, this CpuAskBounded is also the minimum total admission
  slots given to the query.
- A new counter MaxParallelism shows the unbounded CpuAsk before
  scaling.
- The EffectiveParallelism counter remains unchanged,
  showing bounded CpuAsk before scaling.

Testing:
- Update and pass FE test TpcdsCpuCostPlannerTest and
  PlannerTest#testProcessingCost.
- Pass EE test tests/query_test/test_tpcds_queries.py
- Pass custom cluster test tests/custom_cluster/test_executor_groups.py

Change-Id: I5441e31088f90761062af35862be4ce09d116923
Reviewed-on: http://gerrit.cloudera.org:8080/21277
Reviewed-by: Kurt Deschler 
Reviewed-by: Abhishek Rawat 
Tested-by: Impala Public Jenkins 


> Calculate an unbounded version of CpuAsk
> 
>
> Key: IMPALA-12988
> URL: https://issues.apache.org/jira/browse/IMPALA-12988
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Reporter: Riza Suminto
>Assignee: Riza Suminto
>Priority: Major
>
> CpuAsk is calculated through a recursive call beginning at 
> Planner.computeBlockingAwareCores(), which is called after 
> Planner.computeEffectiveParallelism(). It does blocking operator analysis 
> over the selected degree of parallelism that was decided during 
> computeEffectiveParallelism() traversal. That selected degree of parallelism, 
> however, is already bounded by min and max parallelism config, derived from 
> the PROCESSING_COST_MIN_THREADS and MAX_FRAGMENT_INSTANCES_PER_NODE options 
> respectively.
> It is beneficial to have another version of CpuAsk that is not bounded by min 
> and max parallelism config. It should be purely based on the fragment's 
> ProcessingCost and query plan relationship constraints (i.e., the number of 
> JOIN BUILDER fragments should equal the number of JOIN fragments for a 
> partitioned join). During executor group set selection, the Frontend should 
> use the unbounded CpuAsk number to avoid assigning a query to a small 
> executor group set prematurely.






[jira] [Commented] (IMPALA-12938) test_no_inaccessible_objects failed in JDK11 build

2024-04-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839195#comment-17839195
 ] 

ASF subversion and git services commented on IMPALA-12938:
--

Commit 5e7d720257ba86c2d020d483c09673650a3f02d9 in impala's branch 
refs/heads/master from Michael Smith
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=5e7d72025 ]

IMPALA-12938: add-opens for platform.cgroupv1

Adds '--add-opens=jdk.internal.platform.cgroupv1' for Java 11 with
ehcache, covering Impala daemons and frontend tests. Fixes
InaccessibleObjectException detected by test_banned_log_messages.py.

Change-Id: I312ae987c17c6f06e1ffe15e943b1865feef6b82
Reviewed-on: http://gerrit.cloudera.org:8080/21334
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> test_no_inaccessible_objects failed in JDK11 build
> --
>
> Key: IMPALA-12938
> URL: https://issues.apache.org/jira/browse/IMPALA-12938
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Zoltán Borók-Nagy
>Assignee: Michael Smith
>Priority: Major
>  Labels: broken-build
> Fix For: Impala 4.4.0
>
>
> h3. Error Message
> {noformat}
> AssertionError: 
> /data/jenkins/workspace/impala-asf-master-core-jdk11/repos/Impala/logs/custom_cluster_tests/impalad.impala-ec2-centos79-m6i-4xlarge-xldisk-197f.vpc.cloudera.com.jenkins.log.INFO.20240323-184351.16035
>  contains 'InaccessibleObjectException' assert 0 == 1{noformat}
> h3. Stacktrace
> {noformat}
> verifiers/test_banned_log_messages.py:40: in test_no_inaccessible_objects
> self.assert_message_absent('InaccessibleObjectException')
> verifiers/test_banned_log_messages.py:36: in assert_message_absent
> assert returncode == 1, "%s contains '%s'" % (log_file_path, message)
> E   AssertionError: 
> /data/jenkins/workspace/impala-asf-master-core-jdk11/repos/Impala/logs/custom_cluster_tests/impalad.impala-ec2-centos79-m6i-4xlarge-xldisk-197f.vpc.cloudera.com.jenkins.log.INFO.20240323-184351.16035
>  contains 'InaccessibleObjectException'
> E   assert 0 == 1{noformat}
> h3. Standard Output
> {noformat}
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1MemorySubSystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.memory accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1SubsystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.cpu accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1SubsystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.cpuacct accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1SubsystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.cpuset accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1SubsystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.blkio accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> java.lang.reflect.InaccessibleObjectException: Unable to make field private 
> jdk.internal.platform.cgroupv1.CgroupV1SubsystemController 
> jdk.internal.platform.cgroupv1.CgroupV1Subsystem.pids accessible: module 
> java.base does not "opens jdk.internal.platform.cgroupv1" to unnamed module 
> @1a2e2935
> {noformat}






[jira] [Commented] (IMPALA-8998) Admission control accounting for mt_dop

2024-04-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-8998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838947#comment-17838947
 ] 

ASF subversion and git services commented on IMPALA-8998:
-

Commit 6abfdbc56c3d0ec3ac201dd4b8c2c35656d24eaf in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=6abfdbc56 ]

IMPALA-12980: Translate CpuAsk into admission control slots

Impala has a concept of "admission control slots": the amount of
parallelism that should be allowed on an Impala daemon. This defaults to
the number of processors per executor and can be overridden with the
--admission_control_slots flag.

Admission control slot accounting is described in IMPALA-8998. It
computes 'slots_to_use' for each backend based on the maximum number of
instances of any fragment on that backend. This can lead to slot
underestimation and query overadmission. For example, assume an executor
node with 48 CPU cores, configured with --admission_control_slots=48,
that is assigned 4 non-blocking query fragments, each with 12 instances
scheduled on that executor. The IMPALA-8998 algorithm requests slots
equal to the largest instance count of any one fragment (12) rather than
the sum of all non-blocking fragment instances (48). With the remaining
36 slots free, the executor can still admit fragments of other queries,
which will then potentially contend for CPU with the query that is
already running.
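
As a minimal sketch of the difference, assuming only the per-backend
instance counts from the example above (illustrative Python, not the
actual Impala code):

  # Four non-blocking fragments, each with 12 instances on this backend.
  non_blocking_instances = [12, 12, 12, 12]

  # IMPALA-8998 / LARGEST_FRAGMENT: max instances of any one fragment.
  slots_largest_fragment = max(non_blocking_instances)  # 12

  # Summing all non-blocking instances captures the full parallelism
  # actually scheduled on the backend.
  slots_sum = sum(non_blocking_instances)  # 48

  print(slots_largest_fragment, slots_sum)  # 12 48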

When COMPUTE_PROCESSING_COST is enabled, the Planner generates a CpuAsk
number that represents the CPU requirement of the query over a
particular executor group set. This number estimates the largest number
of query fragment instances that can run in parallel without waiting,
as determined by blocking-operator analysis. The fragment trace that
sums to that CpuAsk number can therefore also be translated into
'slots_to_use', giving a closer approximation of the maximum parallel
execution of fragment instances.

This patch adds a new query option, SLOT_COUNT_STRATEGY, to control
which admission control slot accounting to use. There are two possible
values:
- LARGEST_FRAGMENT, the original algorithm from IMPALA-8998. This
  remains the default value for the SLOT_COUNT_STRATEGY option.
- PLANNER_CPU_ASK, which follows the fragment trace that contributes
  to the CpuAsk number. This strategy schedules at least as many
  admission control slots as the LARGEST_FRAGMENT strategy.
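
As with other query options, a session can switch strategies with, for
example, SET SLOT_COUNT_STRATEGY=PLANNER_CPU_ASK; (illustrative usage,
following the standard Impala query-option syntax).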

To implement the PLANNER_CPU_ASK strategy, the Planner marks fragments
that contribute to CpuAsk as dominant fragments. It also passes the
max_slot_per_executor information it knows about the executor group set
to the scheduler.
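
A sketch of how this accounting could look per backend, assuming
(hypothetically) that each scheduled fragment carries its instance
count on that backend and a dominant flag; the function below is
illustrative, not the scheduler's actual code:

  def slots_planner_cpu_ask(fragments, max_slot_per_executor):
      # fragments: hypothetical (instances_on_backend, is_dominant) pairs.
      asked = sum(n for n, dominant in fragments if dominant)
      # Cap at the slots the executor group allows per node.
      return min(asked, max_slot_per_executor)

  # Two dominant fragments of 12 instances each, one non-dominant: 24.
  print(slots_planner_cpu_ask([(12, True), (12, True), (1, False)], 48))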

An AvgAdmissionSlotsPerExecutor counter is added to describe what the
Planner expects the average 'slots_to_use' per backend to be, following
this formula:

  AvgAdmissionSlotsPerExecutor = ceil(CpuAsk / num_executors)

The actual 'slots_to_use' on each backend may differ from
AvgAdmissionSlotsPerExecutor, depending on what is scheduled on that
backend. 'slots_to_use' is shown as the 'AdmissionSlots' counter under
each executor profile node.
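
For example, with hypothetical numbers, a query whose CpuAsk is 48
fragment instances running on an executor group of 10 nodes averages
ceil(48 / 10) = 5 slots per backend:

  import math

  cpu_ask = 48        # hypothetical CpuAsk from the Planner
  num_executors = 10  # hypothetical executor group size
  print(math.ceil(cpu_ask / num_executors))  # 5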

Testing:
- Update test_executors.py with AvgAdmissionSlotsPerExecutor assertion.
- Pass test_tpcds_queries.py::TestTpcdsQueryWithProcessingCost.
- Add EE test test_processing_cost.py.
- Add FE test PlannerTest#testProcessingCostPlanAdmissionSlots.

Change-Id: I338ca96555bfe8d07afce0320b3688a0861663f2
Reviewed-on: http://gerrit.cloudera.org:8080/21257
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Admission control accounting for mt_dop
> ---
>
> Key: IMPALA-8998
> URL: https://issues.apache.org/jira/browse/IMPALA-8998
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Tim Armstrong
>Assignee: Tim Armstrong
>Priority: Major
> Fix For: Impala 3.4.0
>
>
> We should account for the degree of parallelism that the query runs with on a
> backend to avoid over-admitting parallel queries.
> We could probably simply count the effective degree of parallelism (the max #
> of instances of any fragment on that backend) toward the number of slots in
> admission control (although slots are not enabled for the default group yet -
> see IMPALA-8757).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org


