codeant-ai-for-open-source[bot] commented on code in PR #38591:
URL: https://github.com/apache/superset/pull/38591#discussion_r2922527856
##########
superset-frontend/src/features/alerts/AlertReportModal.test.tsx:
##########
@@ -867,143 +921,852 @@ test('filter reappears in dropdown after clearing with X icon', async () => {
userEvent.click(screen.getByTestId('contents-panel'));
await screen.findByText(/test dashboard/i);
- const filterDropdown = screen.getByRole('combobox', {
- name: /select filter/i,
- });
- expect(filterDropdown).toBeInTheDocument();
+ const tabSelector = document.querySelector('.ant-select-disabled');
+ expect(tabSelector).toBeInTheDocument();
Review Comment:
**Suggestion:** This assertion uses a broad `.ant-select-disabled` selector
that can match any disabled select, so the test may pass even when the tab
selector is not disabled. Scope the check to the tab selector placeholder to
validate the intended behavior. [logic error]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ⚠️ Tab-selector regression may go undetected in tests.
- ⚠️ False-positive passing weakens dashboard-tabs coverage quality.
```
</details>
```suggestion
const tabPlaceholder = screen.getByText(/select a tab/i);
expect(tabPlaceholder.closest('.ant-select')).toHaveClass('ant-select-disabled');
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Run test `dashboard with no tabs disables tab selector` at
`AlertReportModal.test.tsx:913`.
2. This test asserts disabled state using global selector
`.ant-select-disabled` at
`:924-925`, not the tab control specifically.
3. In the same file, `dashboard with no tabs and no filters hides filter add
link`
(`:928+`) proves multiple disabled selects can coexist (`:947-952`).
4. Therefore current assertion can pass by matching a different disabled
select, allowing
tab-selector regressions to slip through.
```
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** superset-frontend/src/features/alerts/AlertReportModal.test.tsx
**Line:** 924:925
**Comment:**
*Logic Error: This assertion uses a broad `.ant-select-disabled`
selector that can match any disabled select, so the test may pass even when the
tab selector is not disabled. Scope the check to the tab selector placeholder
to validate the intended behavior.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
tests/unit_tests/commands/report/execute_test.py:
##########
@@ -343,6 +360,123 @@ def test_get_dashboard_urls_with_exporting_dashboard_only(
assert expected_url == result[0]
+@patch("superset.commands.report.execute.CreateDashboardPermalinkCommand")
+@with_feature_flags(ALERT_REPORT_TABS=True)
+def test_get_dashboard_urls_with_filters_and_tabs(
+    mock_permalink_cls,
+    mocker: MockerFixture,
+    app,
+) -> None:
+    mock_report_schedule: ReportSchedule = mocker.Mock(spec=ReportSchedule)
+    mock_report_schedule.chart = False
+    mock_report_schedule.chart_id = None
+    mock_report_schedule.dashboard_id = 123
+    mock_report_schedule.type = "report_type"
+    mock_report_schedule.report_format = "report_format"
+    mock_report_schedule.owners = [1, 2]
+    mock_report_schedule.recipients = []
+    native_filter_rison = "(NATIVE_FILTER-1:(filterType:filter_select))"
+    mock_report_schedule.extra = {
+        "dashboard": {
+            "anchor": json.dumps(["TAB-1", "TAB-2"]),
+            "dataMask": {"NATIVE_FILTER-1": {"filterState": {"value": ["Sales"]}}},
+            "activeTabs": ["TAB-1", "TAB-2"],
+            "urlParams": None,
+            "nativeFilters": [  # type: ignore[typeddict-unknown-key]
+                {
+                    "nativeFilterId": "NATIVE_FILTER-1",
+                    "filterType": "filter_select",
+                    "columnName": "department",
+                    "filterValues": ["Sales"],
+                }
+            ],
+        }
+    }
+    mock_report_schedule.get_native_filters_params.return_value = native_filter_rison  # type: ignore[attr-defined]
+    mock_permalink_cls.return_value.run.side_effect = ["key1", "key2"]
+
+    class_instance: BaseReportState = BaseReportState(
+        mock_report_schedule, "January 1, 2021", "execution_id_example"
+    )
+    class_instance._report_schedule = mock_report_schedule
+
+    result: list[str] = class_instance.get_dashboard_urls()
+
+    import urllib.parse
+
+    base_url = app.config.get("WEBDRIVER_BASEURL", "http://0.0.0.0:8080/")
+    assert result == [
+        urllib.parse.urljoin(base_url, "superset/dashboard/p/key1/"),
+        urllib.parse.urljoin(base_url, "superset/dashboard/p/key2/"),
+    ]
+    mock_report_schedule.get_native_filters_params.assert_called_once()  # type: ignore[attr-defined]
+    assert mock_permalink_cls.call_count == 2
+    for call in mock_permalink_cls.call_args_list:
+        state = call.kwargs["state"]
+        assert state["urlParams"] == [["native_filters", native_filter_rison]]
+    assert mock_permalink_cls.call_args_list[0].kwargs["state"]["anchor"] == "TAB-1"
+    assert mock_permalink_cls.call_args_list[1].kwargs["state"]["anchor"] == "TAB-2"
+
+
+@patch("superset.commands.report.execute.CreateDashboardPermalinkCommand")
+@with_feature_flags(ALERT_REPORT_TABS=True)
+def test_get_dashboard_urls_with_filters_no_tabs(
+    mock_permalink_cls,
+    mocker: MockerFixture,
+    app,
+) -> None:
+    mock_report_schedule: ReportSchedule = mocker.Mock(spec=ReportSchedule)
+    mock_report_schedule.chart = False
+    mock_report_schedule.chart_id = None
+    mock_report_schedule.dashboard_id = 123
+    mock_report_schedule.type = "report_type"
+    mock_report_schedule.report_format = "report_format"
+    mock_report_schedule.owners = [1, 2]
+    mock_report_schedule.recipients = []
+    native_filter_rison = "(NATIVE_FILTER-1:(filterType:filter_select))"
+    mock_report_schedule.extra = {
+        "dashboard": {
+            "anchor": "",
+            "dataMask": {"NATIVE_FILTER-1": {"filterState": {"value": ["Sales"]}}},
+            "activeTabs": None,
+            "urlParams": None,
+            "nativeFilters": [  # type: ignore[typeddict-unknown-key]
+                {
+                    "nativeFilterId": "NATIVE_FILTER-1",
+                    "filterType": "filter_select",
+                    "columnName": "department",
+                    "filterValues": ["Sales"],
+                }
+            ],
+        }
+    }
+    mock_report_schedule.get_native_filters_params.return_value = native_filter_rison  # type: ignore[attr-defined]
+    mock_permalink_cls.return_value.run.return_value = "key1"
+
+    class_instance: BaseReportState = BaseReportState(
+        mock_report_schedule, "January 1, 2021", "execution_id_example"
+    )
+    class_instance._report_schedule = mock_report_schedule
+
+    result: list[str] = class_instance.get_dashboard_urls()
+
+    import urllib.parse
+
+    base_url = app.config.get("WEBDRIVER_BASEURL", "http://0.0.0.0:8080/")
+    assert result == [
+        urllib.parse.urljoin(base_url, "superset/dashboard/p/key1/"),
+    ]
+    mock_report_schedule.get_native_filters_params.assert_called_once()  # type: ignore[attr-defined]
+    assert mock_permalink_cls.call_count == 1
+    state = mock_permalink_cls.call_args_list[0].kwargs["state"]
+    # BUG: {urlParams: [...], **dashboard_state} lets dashboard_state["urlParams"]
+    # overwrite the native_filters param. The tabs path (_get_tabs_urls) does not
+    # have this issue because it builds the dict without **dashboard_state.
+    assert (
+        state["urlParams"] is None
+    )  # should be [["native_filters", native_filter_rison]]
Review Comment:
**Suggestion:** This test is asserting the known broken behavior
(`urlParams` becomes `None`) instead of the expected behavior (native filter
params preserved). As written, it will pass even when the bug exists and will
fail once the implementation is fixed, so it locks in a regression rather than
preventing one. [logic error]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ❌ Native dashboard filters dropped in single-tab report permalinks.
- ⚠️ CI test passes while regression remains unguarded.
- ⚠️ Future bugfixes fail due to inverted assertion.
```
</details>
```suggestion
    # Ensure native filter params are preserved for single-tab permalink state.
    assert state["urlParams"] == [["native_filters", native_filter_rison]]
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Trigger normal report execution flow through Celery task
`reports.execute` at
`superset/tasks/scheduler.py:117`, which calls
`AsyncExecuteReportScheduleCommand(...).run()` at
`superset/tasks/scheduler.py:130`.
2. In execution, dashboard reports reach screenshot generation
`BaseReportState._get_screenshots()`
(`superset/commands/report/execute.py:349`), which
calls `get_dashboard_urls()` (`superset/commands/report/execute.py:381`).
3. For dashboard state with no tab anchors and with native filters,
`get_dashboard_urls()`
builds state as `{"urlParams": [...], **dashboard_state}` at
`superset/commands/report/execute.py:281-288`;
`dashboard_state["urlParams"]` overwrites
native filters, so resulting permalink state has `urlParams=None`.
4. Run unit test `test_get_dashboard_urls_with_filters_no_tabs` in
`tests/unit_tests/commands/report/execute_test.py:423-478`; it currently
asserts
`state["urlParams"] is None` at lines `472-477`, so CI passes while
preserving broken
behavior. If production code is fixed, this test fails, proving assertion is
inverted.
```
</details>
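For illustration, the overwrite mechanics described above can be reproduced with plain dicts. This is a simplified sketch, not the real Superset state objects: `dashboard_state` here is a hand-rolled stand-in for the payload that `get_dashboard_urls()` unpacks.

```python
# Simplified stand-in for the permalink-state merge; the dict shapes below are
# assumptions for illustration, not copied from superset/commands/report/execute.py.
native_filter_rison = "(NATIVE_FILTER-1:(filterType:filter_select))"
dashboard_state = {"anchor": "", "activeTabs": None, "urlParams": None}

# Buggy merge: **dashboard_state is unpacked last, so its urlParams=None
# silently overwrites the computed native_filters entry.
buggy = {"urlParams": [["native_filters", native_filter_rison]], **dashboard_state}
assert buggy["urlParams"] is None

# Reversing the order keeps the computed value, since later keys win.
fixed = {**dashboard_state, "urlParams": [["native_filters", native_filter_rison]]}
assert fixed["urlParams"] == [["native_filters", native_filter_rison]]
```

Both orderings are valid Python; dict literals resolve duplicate keys left to right with the last occurrence winning, which is why the placement of `**dashboard_state` decides whether the native-filter params survive.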
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** tests/unit_tests/commands/report/execute_test.py
**Line:** 472:477
**Comment:**
*Logic Error: This test is asserting the known broken behavior
(`urlParams` becomes `None`) instead of the expected behavior (native filter
params preserved). As written, it will pass even when the bug exists and will
fail once the implementation is fixed, so it locks in a regression rather than
preventing one.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
tests/unit_tests/reports/model_test.py:
##########
@@ -240,3 +240,176 @@ def test_report_generate_native_filter_no_column_name():
"ownState": {},
}
}
+
+
+def test_report_generate_native_filter_select_null_column():
+    report_schedule = ReportSchedule()
+    result = report_schedule._generate_native_filter(
+        "F1", "filter_select", None, ["US"]
+    )
+    assert result["F1"]["extraFormData"]["filters"][0]["col"] == ""
+    assert result["F1"]["filterState"]["label"] == ""
+
+
+def test_generate_native_filter_time_normal():
+    report_schedule = ReportSchedule()
+    result = report_schedule._generate_native_filter(
+        "F2", "filter_time", "ignored", ["Last week"]
+    )
+    assert result == {
+        "F2": {
+            "id": "F2",
+            "extraFormData": {"time_range": "Last week"},
+            "filterState": {"value": "Last week"},
+            "ownState": {},
+        }
+    }
+
+
+def test_generate_native_filter_time_empty_values():
+    # Regression: locks crash behavior until upstream fix (see PROJECT.md)
+    report_schedule = ReportSchedule()
+    with pytest.raises(IndexError):
+        report_schedule._generate_native_filter("F2", "filter_time", "ignored", [])
Review Comment:
**Suggestion:** Expecting `IndexError` here hard-codes a known crash path as
"correct" behavior, which prevents this test suite from catching the real
runtime bug (empty time-filter values should be handled safely, not crash
report URL generation). [possible bug]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ❌ Scheduled report run crashes on empty time filter.
- ⚠️ Dashboard URL generation stops before screenshot capture.
```
</details>
```suggestion
    result = report_schedule._generate_native_filter("F2", "filter_time", "ignored", [])
    assert result == {
        "F2": {
            "id": "F2",
            "extraFormData": {"time_range": None},
            "filterState": {"value": None},
            "ownState": {},
        }
    }
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Create or update a report schedule payload containing
`extra.dashboard.nativeFilters`
with `filterType=\"filter_time\"` and `filterValues=[]`; `extra` is accepted
as generic
dict in `superset/reports/schemas.py:232-234` and `:52`.
2. Trigger report execution path where dashboard URLs are built;
`superset/commands/report/execute.py:266-269` calls
`get_native_filters_params()`.
3. `superset/reports/models.py:194-202` iterates native filters and calls
`_generate_native_filter(...)`.
4. In `superset/reports/models.py:221-223`, `filter_time` branch
dereferences `values[0]`;
with empty list this raises `IndexError`, aborting affected report run.
```
</details>
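A guarded version of the `filter_time` branch might look like the sketch below. `generate_time_filter` is a hypothetical standalone function: its signature and return shape are inferred from `test_generate_native_filter_time_normal` above, not copied from `superset/reports/models.py`.

```python
# Hypothetical standalone sketch of the filter_time branch; the dict shape
# mirrors test_generate_native_filter_time_normal above (an assumption).
def generate_time_filter(filter_id: str, values: list) -> dict:
    # values[0] on an empty list is the IndexError the current test locks in;
    # guarding with a conditional degrades to "no time range" instead of crashing.
    time_range = values[0] if values else None
    return {
        filter_id: {
            "id": filter_id,
            "extraFormData": {"time_range": time_range},
            "filterState": {"value": time_range},
            "ownState": {},
        }
    }

# Populated values pass through; empty values no longer raise.
assert generate_time_filter("F2", ["Last week"])["F2"]["filterState"]["value"] == "Last week"
assert generate_time_filter("F2", [])["F2"]["extraFormData"]["time_range"] is None
```

With a guard like this in place, the regression test above could assert the safe `None` shape (as the suggestion block shows) instead of pinning the `IndexError`.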
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** tests/unit_tests/reports/model_test.py
**Line:** 272:273
**Comment:**
*Possible Bug: Expecting `IndexError` here hard-codes a known crash
path as "correct" behavior, which prevents this test suite from catching the
real runtime bug (empty time-filter values should be handled safely, not crash
report URL generation).
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
```
</details>
##########
superset-frontend/src/features/alerts/AlertReportModal.test.tsx:
##########
@@ -1488,3 +2251,310 @@ test('tabs metadata overwrites seeded filter options', async () => {
within(selectContainer).queryByTitle('Country'),
).not.toBeInTheDocument();
});
+
+test('selecting filter triggers chart data request with correct params', async () => {
+ mockGetChartDataRequest.mockReset();
+ fetchMock.removeRoute(tabsEndpoint);
+ fetchMock.get(tabsEndpoint, tabsWithFilters, { name: tabsEndpoint });
+
+ mockGetChartDataRequest.mockResolvedValue({
+ json: { result: [{ data: [{ country: 'US' }, { country: 'UK' }] }] },
+ });
+
+ render(<AlertReportModal {...generateMockedProps(true, true)} />, {
+ useRedux: true,
+ });
+
+ userEvent.click(screen.getByTestId('contents-panel'));
+ await screen.findByText(/test dashboard/i);
+
+ // Wait for filter dropdown to be available
+ const filterDropdown = await waitFor(() =>
+ screen.getByRole('combobox', { name: /select filter/i }),
+ );
+
+ // Select the Country Filter using comboboxSelect pattern
+ await comboboxSelect(filterDropdown, 'Country Filter', () =>
+ document.querySelector(
+ '.ant-select-selection-item[title="Country Filter"]',
+ ),
+ );
+
+ // getChartDataRequest should have been called for filter values
+ await waitFor(() => {
+ expect(mockGetChartDataRequest).toHaveBeenCalledTimes(1);
+ });
+
+ // Verify it was called with correct datasource and groupby
+ const callArgs = mockGetChartDataRequest.mock.calls[0][0];
+ expect(callArgs.formData.groupby).toEqual(['country']);
+ expect(callArgs.formData.datasource).toBe('1__table');
+
+ mockGetChartDataRequest.mockReset();
+});
+
+test('selected filter excluded from other row dropdowns', async () => {
+ mockGetChartDataRequest.mockReset();
+ fetchMock.removeRoute(tabsEndpoint);
+ fetchMock.get(tabsEndpoint, tabsWithFilters, { name: tabsEndpoint });
+
+ mockGetChartDataRequest.mockResolvedValue({
+ json: { result: [{ data: [{ country: 'US' }] }] },
+ });
+
+ render(<AlertReportModal {...generateMockedProps(true, true)} />, {
+ useRedux: true,
+ });
+
+ userEvent.click(screen.getByTestId('contents-panel'));
+ await screen.findByText(/test dashboard/i);
+
+ // Wait for filter dropdown
+ const filterDropdown = await waitFor(() =>
+ screen.getByRole('combobox', { name: /select filter/i }),
+ );
+
+ // Select Country Filter in row 1
+ await comboboxSelect(filterDropdown, 'Country Filter', () =>
+ document.querySelector(
+ '.ant-select-selection-item[title="Country Filter"]',
+ ),
+ );
+
+ // Wait for getChartDataRequest to complete AND state update to propagate.
+  // The value dropdown becomes non-disabled once optionFilterValues is populated.
+ await waitFor(() => {
+ expect(mockGetChartDataRequest).toHaveBeenCalled();
+ });
+ await waitFor(
+ () => {
+ const valueSelects = screen.queryAllByRole('combobox', {
+ name: /select value/i,
+ });
+ expect(valueSelects.length).toBeGreaterThan(0);
+ },
+ { timeout: 5000 },
+ );
+
+ // Add second filter row
+ const addFilterButton = screen.getByText(/apply another dashboard filter/i);
+ userEvent.click(addFilterButton);
+
+ // Wait for second row
+ await waitFor(() => {
+ expect(
+ screen.getAllByRole('combobox', { name: /select filter/i }),
+ ).toHaveLength(2);
+ });
+
+ // Open filter dropdown in row 2
+ const filterDropdowns = screen.getAllByRole('combobox', {
+ name: /select filter/i,
+ });
+ userEvent.click(filterDropdowns[1]);
+
+ // Country Filter should be excluded (dedup), City Filter should remain.
+  // Use the last virtual list since the first may belong to the closed row 1 dropdown.
+ await waitFor(() => {
+ const virtualLists = document.querySelectorAll('.rc-virtual-list');
+ const lastVirtualList = virtualLists[virtualLists.length - 1];
+ expect(
+ within(lastVirtualList as HTMLElement).queryByText('Country Filter'),
+ ).not.toBeInTheDocument();
+ expect(
+ within(lastVirtualList as HTMLElement).getByText('City Filter'),
+ ).toBeInTheDocument();
+ });
+
+ mockGetChartDataRequest.mockReset();
+}, 30000);
+
+test('invalid CC email blocks submit', async () => {
+ render(<AlertReportModal {...generateMockedProps(false, true, false)} />, {
+ useRedux: true,
+ });
+
+  // Wait for validation to fully propagate (fetch → state → validate → enforce)
+ await screen.findByText('Edit alert');
+ await waitFor(
+ () => {
+ expect(screen.getByRole('button', { name: /save/i })).toBeEnabled();
+ },
+ { timeout: 5000 },
+ );
+
+ // Open notification panel and show CC field
+ userEvent.click(screen.getByTestId('notification-method-panel'));
+ const addCcButton = await screen.findByText(/Add CC Recipients/i);
+ userEvent.click(addCcButton);
+
+ // Type invalid email in CC field
+ const ccInput = await screen.findByTestId('cc');
+ userEvent.type(ccInput, 'not-an-email');
+ fireEvent.blur(ccInput);
+
+ // Save should now be disabled due to invalid email format
+ await waitFor(() => {
+ expect(screen.getByRole('button', { name: /save/i })).toBeDisabled();
+ });
+});
+
+test('invalid BCC email blocks submit', async () => {
+ render(<AlertReportModal {...generateMockedProps(false, true, false)} />, {
+ useRedux: true,
+ });
+
+  // Wait for validation to fully propagate (fetch → state → validate → enforce)
+ await screen.findByText('Edit alert');
+ await waitFor(
+ () => {
+ expect(screen.getByRole('button', { name: /save/i })).toBeEnabled();
+ },
+ { timeout: 5000 },
+ );
+
+ // Open notification panel and show BCC field
+ userEvent.click(screen.getByTestId('notification-method-panel'));
+ const addBccButton = await screen.findByText(/Add BCC Recipients/i);
+ userEvent.click(addBccButton);
+
+ // Type invalid email in BCC field
+ const bccInput = await screen.findByTestId('bcc');
+ userEvent.type(bccInput, 'not-an-email');
+ fireEvent.blur(bccInput);
+
+ // Save should now be disabled due to invalid email format
+ await waitFor(() => {
+ expect(screen.getByRole('button', { name: /save/i })).toBeDisabled();
+ });
+});
+
+test('invalid saved anchor is reset on dashboard load', async () => {
+ fetchMock.removeRoute(tabsEndpoint);
+ fetchMock.get(tabsEndpoint, tabsWithFilters, { name: tabsEndpoint });
+
+ const props = generateMockedProps(true, true);
+ const editProps = {
+ ...props,
+ alert: { ...validAlert, id: 7 },
+ };
+
+ render(<AlertReportModal {...editProps} />, {
+ useRedux: true,
+ });
+
+ userEvent.click(screen.getByTestId('contents-panel'));
+ await screen.findByText(/test dashboard/i);
+
+ // Wait for dashboard tabs to load
+ await waitFor(() => {
+    expect(fetchMock.callHistory.calls(tabsEndpoint).length).toBeGreaterThan(0);
+ });
+
+ // The saved anchor 'TAB_999' is NOT in the dashboard's tabs (TAB_1, TAB_2),
+ // so it should be reset to undefined. Tab selector shows placeholder.
+ await waitFor(() => {
+ expect(screen.getByText(/select a tab/i)).toBeInTheDocument();
+ });
+
+ // TAB_999 should NOT appear as a selected value anywhere
+ expect(
+ document.querySelector('.ant-select-selection-item[title="TAB_999"]'),
+ ).not.toBeInTheDocument();
+});
+
+test('clearing notification recipients disables submit and prevents API call', async () => {
+ fetchMock.put(
+ 'glob:*/api/v1/report/2',
+ { id: 2, result: {} },
+ { name: 'put-no-recipients' },
+ );
+
+ render(<AlertReportModal {...generateMockedProps(false, true, false)} />, {
+ useRedux: true,
+ });
+
+ // Wait for all validation to pass (5 checkmarks = fully valid alert)
+ await waitFor(() => {
+ expect(
+ screen.queryAllByRole('img', { name: /check-circle/i }),
+ ).toHaveLength(5);
+ });
+
+ // Save should be enabled initially
+ expect(screen.getByRole('button', { name: /save/i })).toBeEnabled();
+
+ // Open notification panel and clear the recipients field
+ userEvent.click(screen.getByTestId('notification-method-panel'));
+ const recipientInput = await screen.findByTestId('recipients');
+ userEvent.clear(recipientInput);
+ fireEvent.blur(recipientInput);
+
+ // Save should be disabled — empty recipients block submission
+ await waitFor(() => {
+ expect(screen.getByRole('button', { name: /save/i })).toBeDisabled();
+ });
+
+  // No PUT was emitted — the disabled button prevented any payload from being sent.
+ // This confirms that a payload with empty recipients is never transmitted.
+ expect(fetchMock.callHistory.calls('put-no-recipients')).toHaveLength(0);
+
+ fetchMock.removeRoute('put-no-recipients');
+});
+
+test('empty recipients array is omitted from payload', () => {
+  // Direct test of the payload cleanup logic from AlertReportModal (lines 941-943).
+ // UI validation prevents submitting with empty recipients, so this exercises
+ // the defensive `delete data.recipients` branch in isolation.
+ const data: Record<string, unknown> = {
+ name: 'Test Alert',
+ recipients: [],
+ type: 'Alert',
+ };
+
+ // Mirrors AlertReportModal.tsx lines 941-943
+ if (data.recipients && !(data.recipients as unknown[]).length) {
+ delete data.recipients;
+ }
+
+ expect(data).not.toHaveProperty('recipients');
+ expect(data).toHaveProperty('name', 'Test Alert');
+});
+
+test('non-empty recipients array is preserved in payload', () => {
+ // Complementary to the empty-recipients test: verifies that populated
+ // recipients survive the cleanup logic unchanged.
+ const data: Record<string, unknown> = {
+ name: 'Test Alert',
+    recipients: [{ type: 'Email', recipient_config_json: { target: '[email protected]' } }],
+ type: 'Alert',
+ };
+
+ if (data.recipients && !(data.recipients as unknown[]).length) {
+ delete data.recipients;
+ }
+
+ expect(data).toHaveProperty('recipients');
+ expect((data.recipients as unknown[]).length).toBe(1);
+});
+
+test('modal reopen resets local state', async () => {
+ const props = generateMockedProps(true, false, true);
+
+ render(<AlertReportModal {...props} />, { useRedux: true });
+
+ // Type a name to dirty the form
+ const nameInput = screen.getByPlaceholderText(/enter report name/i);
+ userEvent.type(nameInput, 'Temporary Report');
+ expect(nameInput).toHaveValue('Temporary Report');
Review Comment:
**Suggestion:** The test types into the input and immediately asserts the
value without awaiting typing. Since `userEvent.type` is async, this can assert
too early and intermittently fail. [possible bug]
<details>
<summary><b>Severity Level:</b> Major ⚠️</summary>
```mdx
- ⚠️ Modal reopen test can intermittently fail before cleanup assertions.
- ⚠️ CI reliability drops for local-state reset coverage.
```
</details>
```suggestion
await userEvent.type(nameInput, 'Temporary Report');
expect(nameInput).toHaveValue('Temporary Report');
```
<details>
<summary><b>Steps of Reproduction ✅ </b></summary>
```mdx
1. Run test `modal reopen resets local state` at
`AlertReportModal.test.tsx:2541`.
2. The test types into report-name input and immediately asserts value at
`:2548-2549`.
3. `userEvent` is imported from `spec/helpers/testing-library.tsx:137`
(re-export of
`@testing-library/user-event`), so typing is asynchronous.
4. Under slower execution, assertion can run before typing settles, causing
intermittent
failure before cancel/reset assertions.
```
</details>
<details>
<summary><b>Prompt for AI Agent 🤖 </b></summary>
```mdx
This is a comment left during a code review.
**Path:** superset-frontend/src/features/alerts/AlertReportModal.test.tsx
**Line:** 2548:2549
**Comment:**
*Possible Bug: The test types into the input and immediately asserts
the value without awaiting typing. Since `userEvent.type` is async, this can
assert too early and intermittently fail.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
```
</details>
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
For additional commands, e-mail: [email protected]