codeant-ai-for-open-source[bot] commented on code in PR #38591:
URL: https://github.com/apache/superset/pull/38591#discussion_r3012725820
##########
superset-frontend/src/features/reports/ReportModal/ReportModal.test.tsx:
##########
@@ -56,115 +57,260 @@ jest.mock('@superset-ui/core', () => ({
}));
const mockedIsFeatureEnabled = isFeatureEnabled as jest.Mock;
-// eslint-disable-next-line no-restricted-globals -- TODO: Migrate from describe blocks
-describe('Email Report Modal', () => {
- beforeEach(() => {
- mockedIsFeatureEnabled.mockImplementation(
- featureFlag => featureFlag === FeatureFlag.AlertReports,
- );
- render(<ReportModal {...defaultProps} />, { useRedux: true });
+
+beforeEach(() => {
+ mockedIsFeatureEnabled.mockImplementation(
+ featureFlag => featureFlag === FeatureFlag.AlertReports,
+ );
+});
+
+test('inputs respond correctly', () => {
+ render(<ReportModal {...defaultProps} />, { useRedux: true });
+ // ----- Report name textbox
+ const reportNameTextbox = screen.getByTestId('report-name-test');
+ expect(reportNameTextbox).toHaveDisplayValue('Weekly Report');
+ userEvent.clear(reportNameTextbox);
+ userEvent.type(reportNameTextbox, 'Report name text test');
+ expect(reportNameTextbox).toHaveDisplayValue('Report name text test');
+
+ // ----- Report description textbox
+ const reportDescriptionTextbox = screen.getByTestId(
+ 'report-description-test',
+ );
+ expect(reportDescriptionTextbox).toHaveDisplayValue('');
+ userEvent.type(reportDescriptionTextbox, 'Report description text test');
Review Comment:
**Suggestion:** `userEvent.clear` and `userEvent.type` are asynchronous in
this test setup. Without awaiting them, assertions can run before input state
updates, causing flaky failures where the typed value is not yet present.
[possible bug]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ⚠️ `ReportModal` input-behavior test can intermittently fail on CI.
- ⚠️ Flaky test may block changes to alerts/reports frontend.
```
</details>
```suggestion
test('inputs respond correctly', async () => {
render(<ReportModal {...defaultProps} />, { useRedux: true });
// ----- Report name textbox
const reportNameTextbox = screen.getByTestId('report-name-test');
expect(reportNameTextbox).toHaveDisplayValue('Weekly Report');
await userEvent.clear(reportNameTextbox);
await userEvent.type(reportNameTextbox, 'Report name text test');
expect(reportNameTextbox).toHaveDisplayValue('Report name text test');
// ----- Report description textbox
const reportDescriptionTextbox = screen.getByTestId(
'report-description-test',
);
expect(reportDescriptionTextbox).toHaveDisplayValue('');
  await userEvent.type(reportDescriptionTextbox, 'Report description text test');
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Run the frontend test suite including `ReportModal.test.tsx` (file at
`superset-frontend/src/features/reports/ReportModal/ReportModal.test.tsx`).
2. The test `inputs respond correctly` at lines 67–89 renders `<ReportModal />`
and then calls `userEvent.clear(reportNameTextbox)` and
`userEvent.type(reportNameTextbox, 'Report name text test')` without awaiting
them (lines 72–73).
3. `userEvent` is re-exported from `@testing-library/user-event` in
`spec/helpers/testing-library.tsx:45–47,137` and other tests in this repo
treat `userEvent.clear`/`userEvent.type` as async by awaiting them (e.g.,
`DatasetList.listview.test.tsx:540–541,1272–1273`), confirming the async
contract.
4. On a slower CI or JS environment, the Promises returned by
`userEvent.clear`/`userEvent.type` may not have resolved before the immediate
assertions at lines 74 and 82, causing intermittent failures where the
expected values `'Report name text test'` or `'Report description text test'`
are not yet present.
```
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/features/reports/ReportModal/ReportModal.test.tsx
**Line:** 67:81
**Comment:**
*Possible Bug: `userEvent.clear` and `userEvent.type` are asynchronous
in this test setup. Without awaiting them, assertions can run before input
state updates, causing flaky failures where the typed value is not yet present.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
tests/unit_tests/reports/notifications/slack_tests.py:
##########
@@ -377,3 +377,57 @@ def test_send_slack_no_feature_flag(
```
""",
)
+
+
+@patch("superset.reports.notifications.slackv2.g")
+@patch("superset.reports.notifications.slackv2.get_slack_client")
+def test_slackv2_send_without_channels_raises(
+ slack_client_mock: MagicMock,
+ flask_global_mock: MagicMock,
+ mock_header_data,
+) -> None:
+ from superset.reports.models import ReportRecipients, ReportRecipientType
+ from superset.reports.notifications.base import NotificationContent
+    from superset.reports.notifications.exceptions import NotificationParamException
+
+ flask_global_mock.logs_context = {}
+ content = NotificationContent(name="test", header_data=mock_header_data)
+ notification = SlackV2Notification(
+ recipient=ReportRecipients(
+ type=ReportRecipientType.SLACKV2,
+ recipient_config_json='{"target": ""}',
+ ),
+ content=content,
+ )
+ with pytest.raises(NotificationParamException, match="No recipients"):
+ notification.send()
+
+
+@patch("superset.reports.notifications.slackv2.g")
+@patch("superset.reports.notifications.slackv2.get_slack_client")
+def test_slack_mixin_get_body_truncates_large_table(
+ slack_client_mock: MagicMock,
+ flask_global_mock: MagicMock,
+ mock_header_data,
+) -> None:
+ from superset.reports.models import ReportRecipients, ReportRecipientType
+ from superset.reports.notifications.base import NotificationContent
+
+ flask_global_mock.logs_context = {}
+ # Create a large DataFrame that exceeds the 4000-char message limit
+ large_df = pd.DataFrame({"col_" + str(i): range(100) for i in range(10)})
+ content = NotificationContent(
+ name="test",
+ header_data=mock_header_data,
+ embedded_data=large_df,
+ description="desc",
+ )
+ notification = SlackV2Notification(
+ recipient=ReportRecipients(
+ type=ReportRecipientType.SLACKV2,
+ recipient_config_json='{"target": "some_channel"}',
+ ),
+ content=content,
+ )
+ body = notification._get_body(content=content)
Review Comment:
**Suggestion:** The test is intended to verify truncation for oversized
Slack messages, but it only checks for a marker string. This can pass even when
the body still exceeds Slack's 4000-character limit, allowing a real regression
to slip through and fail at runtime when posting to Slack. Assert the final
body length is within the limit in addition to checking the truncation marker.
[logic error]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ❌ Large SlackV2 alerts may exceed Slack 4000-character limit.
- ⚠️ Slack API calls can fail for oversized notification bodies.
- ⚠️ Truncation logic bug in SlackMixin._get_body goes undetected.
- ⚠️ Report execution via execute.py _send can surface Slack errors.
- ⚠️ Unit test test_slack_mixin_get_body_truncates_large_table misses regression.
```
</details>
```suggestion
    from superset.reports.notifications.slack_mixin import MAXIMUM_MESSAGE_SIZE
body = notification._get_body(content=content)
assert len(body) <= MAXIMUM_MESSAGE_SIZE
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Define a report schedule with a SlackV2 recipient
(`ReportRecipients.type == ReportRecipientType.SLACKV2`) so that
`ReportScheduleCommand.send` in `superset/commands/report/execute.py:65-72`
will eventually send a Slack notification.
2. Configure the report so its query returns a large table that becomes
`embedded_data` in
the notification content (see `self._get_notification_content()` feeding into
`notification_content` passed to `_send` in
`superset/commands/report/execute.py:71-72`).
3. When the schedule executes, `_send` in
`superset/commands/report/execute.py:15-38`
calls `create_notification(recipient, notification_content)` from
`superset/reports/notifications/__init__.py:25-33`, which instantiates
`SlackV2Notification` (`superset/reports/notifications/slackv2.py:52-58`)
and then calls
`notification.send()`.
4. Inside `SlackV2Notification.send`
(`superset/reports/notifications/slackv2.py:80-88`),
`body = self._get_body(content=self._content)` invokes `SlackMixin._get_body`
(`superset/reports/notifications/slack_mixin.py:63-124`), which currently uses
`df.to_markdown()` instead of the truncated `truncated_df` around lines
95-112; this can produce a `body` longer than `MAXIMUM_MESSAGE_SIZE`
(constant defined at `slack_mixin.py:23`) even though the string
`"(table was truncated)"` is present.
5. The unit test `test_slack_mixin_get_body_truncates_large_table` in
`tests/unit_tests/reports/notifications/slack_tests.py:29-54` only asserts
that `"(table was truncated)"` appears in `body` and never checks
`len(body)` against `MAXIMUM_MESSAGE_SIZE`, so this regression (oversized
Slack body) would not be caught by tests even though it violates the
documented 4k-character limit.
```
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** tests/unit_tests/reports/notifications/slack_tests.py
**Line:** 432:432
**Comment:**
*Logic Error: The test is intended to verify truncation for oversized
Slack messages, but it only checks for a marker string. This can pass even when
the body still exceeds Slack's 4000-character limit, allowing a real regression
to slip through and fail at runtime when posting to Slack. Assert the final
body length is within the limit in addition to checking the truncation marker.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
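The invariant being requested can be sketched independently of Superset's code. The helper below is hypothetical (it does not mirror `SlackMixin._get_body`'s actual implementation); it only illustrates the two-part contract the suggestion asks the test to cover: the truncation marker must appear *and* the final body must fit the limit.

```python
# Hypothetical sketch of the truncation contract under test. The constant
# mirrors MAXIMUM_MESSAGE_SIZE in slack_mixin.py; the function is illustrative.
MAXIMUM_MESSAGE_SIZE = 4000
TRUNCATION_MARKER = "(table was truncated)"


def truncate_body(table_markdown: str, limit: int = MAXIMUM_MESSAGE_SIZE) -> str:
    """Return the table text, truncated with a marker if it exceeds the limit."""
    if len(table_markdown) <= limit:
        return table_markdown
    # Leave room for the newline and the marker itself.
    keep = limit - len(TRUNCATION_MARKER) - 1
    return table_markdown[:keep] + "\n" + TRUNCATION_MARKER


body = truncate_body("x" * 10_000)
assert TRUNCATION_MARKER in body           # the marker check alone is not enough...
assert len(body) <= MAXIMUM_MESSAGE_SIZE   # ...the length bound is the real invariant
```

A marker-only assertion would still pass if the untruncated table were appended elsewhere in the body, which is exactly the regression described above.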
##########
superset-frontend/src/features/reports/ReportModal/ReportModal.test.tsx:
##########
@@ -56,115 +57,260 @@ jest.mock('@superset-ui/core', () => ({
}));
const mockedIsFeatureEnabled = isFeatureEnabled as jest.Mock;
-// eslint-disable-next-line no-restricted-globals -- TODO: Migrate from describe blocks
-describe('Email Report Modal', () => {
- beforeEach(() => {
- mockedIsFeatureEnabled.mockImplementation(
- featureFlag => featureFlag === FeatureFlag.AlertReports,
- );
- render(<ReportModal {...defaultProps} />, { useRedux: true });
+
+beforeEach(() => {
+ mockedIsFeatureEnabled.mockImplementation(
+ featureFlag => featureFlag === FeatureFlag.AlertReports,
+ );
+});
+
+test('inputs respond correctly', () => {
+ render(<ReportModal {...defaultProps} />, { useRedux: true });
+ // ----- Report name textbox
+ const reportNameTextbox = screen.getByTestId('report-name-test');
+ expect(reportNameTextbox).toHaveDisplayValue('Weekly Report');
+ userEvent.clear(reportNameTextbox);
+ userEvent.type(reportNameTextbox, 'Report name text test');
+ expect(reportNameTextbox).toHaveDisplayValue('Report name text test');
+
+ // ----- Report description textbox
+ const reportDescriptionTextbox = screen.getByTestId(
+ 'report-description-test',
+ );
+ expect(reportDescriptionTextbox).toHaveDisplayValue('');
+ userEvent.type(reportDescriptionTextbox, 'Report description text test');
+ expect(reportDescriptionTextbox).toHaveDisplayValue(
+ 'Report description text test',
+ );
+
+ // ----- Crontab
+ const crontabInputs = screen.getAllByRole('combobox');
+ expect(crontabInputs).toHaveLength(5);
+});
+
+test('does not allow user to create a report without a name', () => {
+ render(<ReportModal {...defaultProps} />, { useRedux: true });
+ const reportNameTextbox = screen.getByTestId('report-name-test');
+ const addButton = screen.getByRole('button', { name: /add/i });
+
+ expect(reportNameTextbox).toHaveDisplayValue('Weekly Report');
+ expect(addButton).toBeEnabled();
+
+ userEvent.clear(reportNameTextbox);
Review Comment:
**Suggestion:** The clear action is asynchronous, so checking button
disabled state immediately can race with React state updates and intermittently
assert the pre-clear state. [possible bug]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ⚠️ Name-validation test for `ReportModal` can be flaky in CI.
- ⚠️ Flaky test may slow development of report scheduling UI.
```
</details>
```suggestion
test('does not allow user to create a report without a name', async () => {
render(<ReportModal {...defaultProps} />, { useRedux: true });
const reportNameTextbox = screen.getByTestId('report-name-test');
const addButton = screen.getByRole('button', { name: /add/i });
expect(reportNameTextbox).toHaveDisplayValue('Weekly Report');
expect(addButton).toBeEnabled();
await userEvent.clear(reportNameTextbox);
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Run `ReportModal.test.tsx` (path
`superset-frontend/src/features/reports/ReportModal/ReportModal.test.tsx`)
as part of the
frontend Jest suite.
2. The test `does not allow user to create a report without a name` at lines
91–103 calls `userEvent.clear(reportNameTextbox)` without awaiting it (line
99), even though `userEvent` is imported from `spec/helpers/testing-library`
which re-exports `@testing-library/user-event` (see
`spec/helpers/testing-library.tsx:45,137`).
3. The subsequent expectations on lines 101–102 assert that the textbox is
empty and the Add button is disabled immediately after the clear call, while
other tests in the repo handling text inputs usually `await userEvent.clear`
and `await userEvent.type` (e.g., `DashboardList.behavior.test.tsx:309–310`,
`DatasetList.behavior.test.tsx:331–332`), indicating that these operations
are treated as asynchronous.
4. Under slower or more concurrent environments, the `clear` Promise may not
complete before the assertions run, leading to intermittent failures where
the textbox still contains `'Weekly Report'` and the Add button remains
enabled, making this validation test flaky.
```
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/features/reports/ReportModal/ReportModal.test.tsx
**Line:** 91:99
**Comment:**
*Possible Bug: The clear action is asynchronous, so checking button
disabled state immediately can race with React state updates and intermittently
assert the pre-clear state.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/pages/AlertReportList/AlertReportList.test.tsx:
##########
@@ -16,271 +16,469 @@
* specific language governing permissions and limitations
* under the License.
*/
+import type React from 'react';
import fetchMock from 'fetch-mock';
-import configureStore from 'redux-mock-store';
-import thunk from 'redux-thunk';
import {
render,
screen,
fireEvent,
waitFor,
+ createStore,
} from 'spec/helpers/testing-library';
+import { Provider } from 'react-redux';
import { MemoryRouter } from 'react-router-dom';
import { QueryParamProvider } from 'use-query-params';
import { ReactRouter5Adapter } from 'use-query-params/adapters/react-router-5';
-import React from 'react';
import AlertListComponent from 'src/pages/AlertReportList';
-// Cast to accept partial mock props in tests
+jest.setTimeout(30000);
+
const AlertList = AlertListComponent as unknown as React.FC<
Record<string, any>
>;
-const mockStore = configureStore([thunk]);
-const store = mockStore({});
-
-const alertsEndpoint = 'glob:*/api/v1/report/?*';
-const alertEndpoint = 'glob:*/api/v1/report/*';
-const alertsInfoEndpoint = 'glob:*/api/v1/report/_info*';
-const alertsCreatedByEndpoint = 'glob:*/api/v1/report/related/created_by*';
-
-const mockalerts = Array.from({ length: 3 }, (_, i) => ({
- active: true,
- changed_by: {
- first_name: `user ${i}`,
- id: i,
+// -- Mock data (IDs start at 1 to avoid the `if (data?.id)` falsy guard) --
+
+const mockAlerts = [
+ {
+ id: 1,
+ name: 'Weekly Sales Alert',
+ active: true,
+ last_state: 'Success',
+ type: 'Alert',
+ owners: [{ id: 1, first_name: 'Admin', last_name: 'User' }],
+ recipients: [{ id: 1, type: 'Email' }],
+ changed_by: { id: 1, first_name: 'Admin', last_name: 'User' },
+ changed_on_delta_humanized: '1 day ago',
+ created_by: { id: 1, first_name: 'Admin', last_name: 'User' },
+ created_on: new Date().toISOString(),
+ last_eval_dttm: Date.now(),
+ crontab: '0 9 * * 1',
+ crontab_humanized: 'Every Monday at 09:00',
+ timezone: 'UTC',
},
- changed_on_delta_humanized: `${i} day(s) ago`,
- created_by: {
- first_name: `user ${i}`,
- id: i,
+ {
+ id: 2,
+ name: 'Daily Error Alert',
+ active: true,
+ last_state: 'Error',
+ type: 'Alert',
+ owners: [{ id: 2, first_name: 'Data', last_name: 'Analyst' }],
+ recipients: [{ id: 2, type: 'Slack' }],
+ changed_by: { id: 2, first_name: 'Data', last_name: 'Analyst' },
+ changed_on_delta_humanized: '2 days ago',
+ created_by: { id: 2, first_name: 'Data', last_name: 'Analyst' },
+ created_on: new Date().toISOString(),
+ last_eval_dttm: Date.now(),
+ crontab: '0 8 * * *',
+ crontab_humanized: 'Every day at 08:00',
+ timezone: 'US/Pacific',
},
- created_on: new Date().toISOString,
- id: i,
- last_eval_dttm: Date.now(),
- last_state: 'ok',
- name: `alert ${i} `,
- owners: [{ id: 1 }],
- recipients: [
- {
- id: `${i}`,
- type: 'email',
- },
- ],
- type: 'alert',
-}));
+ {
+ id: 3,
+ name: 'Monthly Revenue Alert',
+ active: false,
+ last_state: 'Working',
+ type: 'Alert',
+ owners: [{ id: 1, first_name: 'Admin', last_name: 'User' }],
+ recipients: [{ id: 3, type: 'Email' }],
+ changed_by: { id: 1, first_name: 'Admin', last_name: 'User' },
+ changed_on_delta_humanized: '5 days ago',
+ created_by: { id: 1, first_name: 'Admin', last_name: 'User' },
+ created_on: new Date().toISOString(),
+ last_eval_dttm: Date.now(),
+ crontab: '0 0 1 * *',
+ crontab_humanized: 'First day of the month',
+ timezone: 'UTC',
+ },
+];
+
+const mockReports = [
+ {
+ id: 10,
+ name: 'Weekly Dashboard Report',
+ active: true,
+ last_state: 'Success',
+ type: 'Report',
+ owners: [{ id: 1, first_name: 'Admin', last_name: 'User' }],
+ recipients: [{ id: 10, type: 'Email' }],
+ changed_by: { id: 1, first_name: 'Admin', last_name: 'User' },
+ changed_on_delta_humanized: '1 day ago',
+ created_by: { id: 1, first_name: 'Admin', last_name: 'User' },
+ created_on: new Date().toISOString(),
+ last_eval_dttm: Date.now(),
+ crontab: '0 9 * * 1',
+ crontab_humanized: 'Every Monday at 09:00',
+ timezone: 'UTC',
+ },
+ {
+ id: 11,
+ name: 'Monthly KPI Report',
+ active: false,
+ last_state: 'Not triggered',
+ type: 'Report',
+ owners: [{ id: 1, first_name: 'Admin', last_name: 'User' }],
+ recipients: [{ id: 11, type: 'Slack' }],
+ changed_by: { id: 1, first_name: 'Admin', last_name: 'User' },
+ changed_on_delta_humanized: '3 days ago',
+ created_by: { id: 1, first_name: 'Admin', last_name: 'User' },
+ created_on: new Date().toISOString(),
+ last_eval_dttm: Date.now(),
+ crontab: '0 0 1 * *',
+ crontab_humanized: 'First day of the month',
+ timezone: 'UTC',
+ },
+];
const mockUser = {
userId: 1,
- firstName: 'user 1',
- lastName: 'lastname',
+ firstName: 'Admin',
+ lastName: 'User',
+};
+
+// -- API endpoints (named for cleanup) --
+
+const ENDPOINTS = {
+ LIST: 'glob:*/api/v1/report/?*',
+ INFO: 'glob:*/api/v1/report/_info*',
+ SINGLE: 'glob:*/api/v1/report/*',
Review Comment:
**Suggestion:** The `SINGLE` route matcher is too broad and can also match
collection-level URLs (like bulk delete on `/api/v1/report/?...`). That causes
the wrong mock (`delete-alert`) to handle bulk-delete requests, returning `{}`
instead of `{ message: ... }`, which can hide real failures or produce
incorrect behavior in tests. Narrow this matcher to numeric ID paths only.
[logic error]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ⚠️ Bulk delete alert tests may hit single-resource mock.
- ⚠️ Bulk delete response `message` field becomes undefined in tests.
- ⚠️ Future bulk delete assertions may silently exercise wrong endpoint.
```
</details>
```suggestion
SINGLE: /\/api\/v1\/report\/\d+$/,
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Open `superset-frontend/src/pages/AlertReportList/index.tsx` and note the
bulk delete wiring: `deleteAlerts` is defined with `method: 'DELETE'` and
`endpoint: '/api/v1/report/'` using `requestType: 'rison'` (lines 3–7 in the
`Showing lines 90 to 239` snippet), and `handleBulkAlertDelete` calls
`deleteAlerts(alertsToDelete.map(({ id }) => id))` then uses the returned
`{ message }` (lines 115–121 of that same snippet).
2. In the same file, see `ConfirmStatusChange` at `index.tsx` lines 19–27 in
the `Showing lines 600 to 679` snippet: its
`onConfirm={handleBulkAlertDelete}` is wired into the ListView bulk action
"Delete", so confirming bulk delete in the UI issues a DELETE to
`/api/v1/report/?q=...` (collection endpoint with `?`), matching the pattern
used in the dashboard bulk-delete tests (`DashboardList.cardview.test.tsx`
shows a DELETE to `/api/v1/dashboard/?...` at "Showing lines 316 to 355",
lines 10–23).
3. Open `superset-frontend/src/pages/AlertReportList/AlertReportList.test.tsx`
around the ENDPOINTS definition (`Showing lines 130 to 249`): `LIST` is
`'glob:*/api/v1/report/?*'` and `SINGLE` is `'glob:*/api/v1/report/*'`
(lines 12–18, actual lines 141–147). In `setupMocks` (lines 38–82, actual
~167–211) both DELETE mocks are registered:
`fetchMock.delete(ENDPOINTS.SINGLE, {}, { name: 'delete-alert' });` followed by
`fetchMock.delete(ENDPOINTS.LIST, { message: 'Deleted' }, { name: 'delete-bulk' });`.
4. When a bulk delete is triggered (via the ConfirmStatusChange flow in step
2) under this
test setup, the outgoing DELETE to `/api/v1/report/?q=...` matches BOTH
globs: the broad
single-resource matcher `'glob:*/api/v1/report/*'` and the collection matcher
`'glob:*/api/v1/report/?*'`. Because the single-resource route is registered
first for
DELETE, fetch-mock will resolve the request against the `delete-alert`
handler instead of
`delete-bulk`, returning `{}` instead of `{ message: 'Deleted' }`. Any test
that asserts
the bulk route was called (analogous to the dashboard bulk delete assertion
in
`DashboardList.cardview.test.tsx` lines 20–25) or relies on the `message`
field from the
bulk response would observe incorrect behavior: the wrong mock is counted
and `message` is
`undefined`.
```
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:**
superset-frontend/src/pages/AlertReportList/AlertReportList.test.tsx
**Line:** 144:144
**Comment:**
*Logic Error: The `SINGLE` route matcher is too broad and can also
match collection-level URLs (like bulk delete on `/api/v1/report/?...`). That
causes the wrong mock (`delete-alert`) to handle bulk-delete requests,
returning `{}` instead of `{ message: ... }`, which can hide real failures or
produce incorrect behavior in tests. Narrow this matcher to numeric ID paths
only.
Validate the correctness of the flagged issue. If correct, How can I resolve
this? If you propose a fix, implement it and please make it concise.
```
</details>
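For illustration, the matching difference between the broad glob and the proposed numeric-ID matcher can be checked with an equivalent Python regex (the URLs are examples; fetch-mock applies the same principle when matching request URLs against a regex route):

```python
import re

# Python equivalent of the proposed matcher /\/api\/v1\/report\/\d+$/:
# it accepts single-resource URLs ending in a numeric ID, but rejects the
# collection URL used by bulk delete (/api/v1/report/?q=...), which the
# broad glob 'glob:*/api/v1/report/*' would also match.
SINGLE = re.compile(r"/api/v1/report/\d+$")

assert SINGLE.search("http://localhost/api/v1/report/123") is not None
assert SINGLE.search("http://localhost/api/v1/report/?q=(ids:!(1,2,3))") is None
```

With the narrowed matcher, the bulk-delete DELETE can only be handled by the `delete-bulk` route, so its `{ message: 'Deleted' }` payload reaches the code under test.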
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]