bito-code-review[bot] commented on code in PR #38890:
URL: https://github.com/apache/superset/pull/38890#discussion_r2997347790
##########
tests/unit_tests/mcp_service/dashboard/tool/test_dashboard_generation.py:
##########
@@ -813,17 +809,15 @@ async def test_add_chart_to_tabbed_dashboard(
"DASHBOARD_VERSION_KEY": "v2",
}
)
+ mock_find_dashboard.return_value = mock_dashboard
Review Comment:
<div>
<div id="suggestion">
<div id="issue"><b>Incorrect mock setup for consecutive DAO calls</b></div>
<div id="fix">
The mock for DashboardDAO.find_by_id always returns the initial dashboard,
but the code under test calls it twice: once for validation (which should see
the initial dashboard) and again for the re-fetch after the update (which
should see the updated one). As written, the tool returns a dashboard with
stale chart data. Use side_effect to return a different value on each
consecutive call.
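A minimal sketch of the `side_effect` pattern, using `MagicMock` stand-ins for the initial and updated dashboards (the variable names here are illustrative, not the test's actual fixtures):

```python
from unittest.mock import MagicMock

# Hypothetical stand-ins for the dashboard before and after the update.
initial_dashboard = MagicMock(name="initial_dashboard")
updated_dashboard = MagicMock(name="updated_dashboard")

mock_find_dashboard = MagicMock()
# side_effect yields one value per call: the validation call sees the
# initial dashboard, the post-update re-fetch sees the updated one.
mock_find_dashboard.side_effect = [initial_dashboard, updated_dashboard]

assert mock_find_dashboard(1) is initial_dashboard  # first call: validation
assert mock_find_dashboard(1) is updated_dashboard  # second call: re-fetch
```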
</div>
</div>
<small><i>Code Review Run #fed222</i></small>
</div>
---
Should Bito avoid suggestions like this for future reviews? (<a
href=https://alpha.bito.ai/home/ai-agents/review-rules>Manage Rules</a>)
- [ ] Yes, avoid them
##########
superset/mcp_service/dashboard/tool/add_chart_to_existing_dashboard.py:
##########
@@ -534,7 +509,7 @@ def add_chart_to_existing_dashboard(
error=None,
)
- except (CommandException, SQLAlchemyError, KeyError, ValueError) as e:
+ except Exception as e:
Review Comment:
<div>
<div id="suggestion">
<div id="issue"><b>Overly broad exception handling</b></div>
<div id="fix">
Line 512 catches a blind `Exception`, which is too broad. Catch the expected
exception types (e.g., `ValueError`, `KeyError`, `SQLAlchemyError`) instead,
so that unexpected programming errors surface rather than being silently
converted into tool error responses, and the error handling stays clear and
maintainable.
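One way to narrow the handler, sketched with an illustrative helper; the function name and the exact exception tuple are assumptions for this example, not the tool's actual code:

```python
# Illustrative helper: run a callable and convert only the expected,
# recoverable failure modes into an error payload. Anything else
# (attribute errors, typos, etc.) propagates to the caller instead of
# being swallowed.
EXPECTED_ERRORS = (ValueError, KeyError, TypeError)

def run_tool_step(fn):
    try:
        return fn(), None
    except EXPECTED_ERRORS as e:
        return None, f"{type(e).__name__}: {e}"

ok, err = run_tool_step(lambda: {"v2": 1}["v1"])  # KeyError is expected
assert ok is None and err.startswith("KeyError")

ok, err = run_tool_step(lambda: 42)               # success path
assert ok == 42 and err is None
```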
</div>
<details>
<summary>
<b>Code suggestion</b>
</summary>
<blockquote>Check the AI-generated fix before applying</blockquote>
<div id="code">
````suggestion
error=None,
)
except (ValueError, KeyError, TypeError, AttributeError) as e:
````
</div>
</details>
</div>
<small><i>Code Review Run #fed222</i></small>
</div>
---
##########
tests/unit_tests/mcp_service/dashboard/tool/test_dashboard_generation.py:
##########
@@ -915,17 +909,15 @@ async def test_add_chart_to_specific_tab_by_name(
"DASHBOARD_VERSION_KEY": "v2",
}
)
+ mock_find_dashboard.return_value = mock_dashboard
Review Comment:
<div>
<div id="suggestion">
<div id="issue"><b>Incorrect mock setup for consecutive DAO calls</b></div>
<div id="fix">
Same issue as above: the mock needs to return different dashboards for
validation and re-fetch calls.
</div>
</div>
<small><i>Code Review Run #fed222</i></small>
</div>
---
##########
tests/unit_tests/mcp_service/dashboard/tool/test_dashboard_generation.py:
##########
@@ -1228,122 +1402,58 @@ def test_long_title_is_truncated(self):
class TestDashboardSerializationEagerLoading:
- """Tests for eager loading fix in dashboard serialization paths.
-
- The re-fetch uses DashboardDAO.find_by_id() with query_options for
- eager loading. A try/except around the DAO call handles "Can't
- reconnect until invalid transaction is rolled back" errors in
- multi-tenant environments by rolling back and falling back to the
- original dashboard object.
- """
+ """Tests for eager loading fix in dashboard serialization paths."""
- @patch("superset.models.dashboard.Dashboard")
@patch("superset.daos.dashboard.DashboardDAO.find_by_id")
- @patch("superset.db.session")
- @pytest.mark.asyncio
- async def test_generate_dashboard_refetches_via_dao(
- self, mock_db_session, mock_find_by_id, mock_dashboard_cls, mcp_server
- ):
- """generate_dashboard re-fetches dashboard via DashboardDAO.find_by_id()
+ def test_generate_dashboard_refetches_via_dao(self, mock_find_by_id):
+ """generate_dashboard re-fetches dashboard via DashboardDAO.find_by_id
+ with eager-loaded slice relationships before serialization."""
- charts = [_mock_chart(id=1, slice_name="Chart 1")]
- dashboard = _mock_dashboard(id=10, title="Refetch Test")
- _setup_generate_dashboard_mocks(
- mock_db_session, mock_find_by_id, mock_dashboard_cls, charts, dashboard
- )
+ refetched_dashboard = _mock_dashboard()
+ refetched_chart = _mock_chart(id=1, slice_name="Refetched Chart")
+ refetched_dashboard.slices = [refetched_chart]
- request = {"chart_ids": [1], "dashboard_title": "Refetch Test"}
- async with Client(mcp_server) as client:
- result = await client.call_tool("generate_dashboard", {"request": request})
+ mock_find_by_id.return_value = refetched_dashboard
- assert result.structured_content["error"] is None
- # Verify DashboardDAO.find_by_id was called for re-fetch
- mock_find_by_id.assert_called()
+ from superset.daos.dashboard import DashboardDAO
- @patch("superset.models.dashboard.Dashboard")
- @patch("superset.daos.dashboard.DashboardDAO.find_by_id")
- @patch("superset.db.session")
- @pytest.mark.asyncio
- async def test_generate_dashboard_refetch_sqlalchemy_error_rollback(
- self, mock_db_session, mock_find_by_id, mock_dashboard_cls, mcp_server
- ):
- """When the DAO re-fetch raises SQLAlchemyError, the session is
- rolled back and a minimal response is returned with only scalar
- attributes (no owners/tags/charts that would trigger lazy-loading)."""
- from sqlalchemy.exc import SQLAlchemyError
-
- charts = [_mock_chart(id=1, slice_name="Chart 1")]
- dashboard = _mock_dashboard(id=10, title="Rollback Test")
- _setup_generate_dashboard_mocks(
- mock_db_session, mock_find_by_id, mock_dashboard_cls, charts, dashboard
+ result = (
+ DashboardDAO.find_by_id(1, query_options=["dummy"]) or _mock_dashboard()
)
- # Make the DAO re-fetch raise SQLAlchemyError
- mock_find_by_id.side_effect = SQLAlchemyError("session error")
- request = {"chart_ids": [1], "dashboard_title": "Rollback Test"}
- async with Client(mcp_server) as client:
- result = await client.call_tool("generate_dashboard", {"request": request})
+ assert result is refetched_dashboard
+ mock_find_by_id.assert_called_once_with(1, query_options=["dummy"])
- data = result.structured_content
- assert data["error"] is None
- mock_db_session.rollback.assert_called()
- # Minimal response should have scalar fields
- dash = data["dashboard"]
- assert dash["id"] == 10
- assert dash["dashboard_title"] == "Rollback Test"
- assert "/superset/dashboard/10/" in data["dashboard_url"]
- # Relationship fields should be empty (defaults)
- assert dash["owners"] == []
- assert dash["tags"] == []
- assert dash["charts"] == []
-
- @patch("superset.commands.dashboard.update.UpdateDashboardCommand")
@patch("superset.daos.dashboard.DashboardDAO.find_by_id")
- @patch("superset.db.session")
- @pytest.mark.asyncio
- async def test_add_chart_refetch_sqlalchemy_error_rollback(
- self, mock_db_session, mock_find_dashboard, mock_update_command, mcp_server
- ):
- """When the DAO re-fetch raises SQLAlchemyError after adding a chart,
- the session is rolled back and a minimal response is returned with
- only scalar attributes and position info."""
- from sqlalchemy.exc import SQLAlchemyError
+ def test_add_chart_refetches_dashboard_via_dao(self, mock_find_by_id):
+ """add_chart_to_existing_dashboard re-fetches dashboard via
+ DashboardDAO.find_by_id with eager-loaded slice relationships."""
+ original_dashboard = _mock_dashboard()
+ refetched_dashboard = _mock_dashboard()
+ refetched_dashboard.slices = [_mock_chart()]
- mock_dashboard = _mock_dashboard(id=1, title="Dashboard")
- mock_dashboard.slices = []
- mock_dashboard.position_json = "{}"
+ mock_find_by_id.return_value = refetched_dashboard
- mock_chart = _mock_chart(id=15)
- mock_db_session.get.return_value = mock_chart
+ from superset.daos.dashboard import DashboardDAO
+
+ result = (
+ DashboardDAO.find_by_id(original_dashboard.id, query_options=["dummy"])
+ or original_dashboard
+ )
- updated = _mock_dashboard(id=1, title="Dashboard")
- updated.slices = [_mock_chart(id=15)]
- mock_update_command.return_value.run.return_value = updated
+ assert result is refetched_dashboard
- # First call returns dashboard (validation), second raises (re-fetch)
- mock_find_dashboard.side_effect = [
- mock_dashboard,
- SQLAlchemyError("session error"),
- ]
+ @patch("superset.daos.dashboard.DashboardDAO.find_by_id")
+ def test_add_chart_falls_back_on_refetch_failure(self, mock_find_by_id):
+ """add_chart_to_existing_dashboard falls back to original dashboard
+ if DashboardDAO.find_by_id returns None."""
+ original_dashboard = _mock_dashboard()
+ mock_find_by_id.return_value = None
- request = {"dashboard_id": 1, "chart_id": 15}
- async with Client(mcp_server) as client:
- result = await client.call_tool(
- "add_chart_to_existing_dashboard", {"request": request}
- )
+ from superset.daos.dashboard import DashboardDAO
+
+ result = (
+ DashboardDAO.find_by_id(original_dashboard.id, query_options=["dummy"])
+ or original_dashboard
+ )
- data = result.structured_content
- assert data["error"] is None
- mock_db_session.rollback.assert_called()
- # Minimal response should have scalar fields
- dash = data["dashboard"]
- assert dash["id"] == 1
- assert dash["dashboard_title"] == "Dashboard"
- assert "/superset/dashboard/1/" in data["dashboard_url"]
- # Position info should still be returned
- assert data["position"] is not None
- assert "chart_key" in data["position"]
- # Relationship fields should be empty (defaults)
- assert dash["owners"] == []
- assert dash["tags"] == []
- assert dash["charts"] == []
+ assert result is original_dashboard
Review Comment:
<div>
<div id="suggestion">
<div id="issue"><b>Inadequate Unit Tests for Tool Behavior</b></div>
<div id="fix">
The new TestDashboardSerializationEagerLoading tests mock
DashboardDAO.find_by_id but never invoke the tool functions they claim to
test. Instead of exercising the re-fetch logic inside generate_dashboard or
add_chart_to_existing_dashboard, they re-implement the re-fetch expression
inline and assert against the mock, so they would keep passing even if the
tools' actual re-fetch behavior regressed.
</div>
</div>
<div id="suggestion">
<div id="issue"><b>Missing SQLAlchemy Error Handling Tests</b></div>
<div id="fix">
The removed tests covered critical error handling for SQLAlchemyError during
dashboard re-fetch after creation/update, which is essential for the 'eager
loading fix' in multi-tenant setups. The new tests completely omit this
coverage, risking undetected failures that could cause session corruption or
incorrect responses.
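The removed rollback path could be covered by a focused unit test along these lines; `refetch_with_fallback` and the stand-in exception class are illustrative sketches of the pattern described above, not the tool's real code:

```python
from unittest.mock import MagicMock

class FakeSessionError(Exception):
    """Stand-in for sqlalchemy.exc.SQLAlchemyError in this sketch."""

def refetch_with_fallback(dao, session, dashboard, exc=FakeSessionError):
    # Mirrors the pattern under discussion: attempt the eager-loading
    # re-fetch; on a session error, roll back and fall back to the
    # original dashboard object.
    try:
        return dao.find_by_id(dashboard.id) or dashboard
    except exc:
        session.rollback()
        return dashboard

dao, session = MagicMock(), MagicMock()
dashboard = MagicMock(id=10)
dao.find_by_id.side_effect = FakeSessionError("session error")

result = refetch_with_fallback(dao, session, dashboard)
assert result is dashboard          # fell back to the original object
session.rollback.assert_called_once()
```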
</div>
</div>
<small><i>Code Review Run #fed222</i></small>
</div>
---
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]