codeant-ai-for-open-source[bot] commented on code in PR #37042:
URL: https://github.com/apache/superset/pull/37042#discussion_r2680990144


##########
superset/commands/chart/delete.py:
##########
@@ -68,3 +73,132 @@ def validate(self) -> None:
                 security_manager.raise_for_ownership(model)
             except SupersetSecurityException as ex:
                 raise ChartForbiddenError() from ex
+
+    def _cleanup_dashboard_metadata(  # noqa: C901
+        self, chart_id: int
+    ) -> None:
+        """
+        Remove references to this chart from all dashboard metadata.
+
+        When a chart is deleted, dashboards may still contain references to the
+        chart ID in various metadata fields (expanded_slices, filter_scopes, etc.).
+        This method cleans up those references to prevent issues during dashboard
+        export/import.
+        """
+        # Find all dashboards that contain this chart
+        dashboards = (
+            db.session.query(Dashboard)
+            .filter(Dashboard.slices.any(id=chart_id))  # type: ignore[attr-defined]
+            .all()
+        )
+
+        for dashboard in dashboards:
+            metadata = dashboard.params_dict
+            modified = False
+
+            # Clean up expanded_slices
+            if "expanded_slices" in metadata:
+                chart_id_str = str(chart_id)
+                if chart_id_str in metadata["expanded_slices"]:
+                    del metadata["expanded_slices"][chart_id_str]
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from expanded_slices in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+
+            # Clean up timed_refresh_immune_slices
+            if "timed_refresh_immune_slices" in metadata:
+                if chart_id in metadata["timed_refresh_immune_slices"]:
+                    metadata["timed_refresh_immune_slices"].remove(chart_id)
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from timed_refresh_immune_slices "
+                        "in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+
+            # Clean up filter_scopes
+            if "filter_scopes" in metadata:
+                chart_id_str = str(chart_id)
+                if chart_id_str in metadata["filter_scopes"]:
+                    del metadata["filter_scopes"][chart_id_str]
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from filter_scopes in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+                # Also clean from immune lists
+                for filter_scope in metadata["filter_scopes"].values():
+                    for attributes in filter_scope.values():
+                        if chart_id in attributes.get("immune", []):
+                            attributes["immune"].remove(chart_id)
+                            modified = True
+
+            # Clean up default_filters
+            if "default_filters" in metadata:
+                default_filters = json.loads(metadata["default_filters"])

Review Comment:
   **Suggestion:** The code calls json.loads on `metadata["default_filters"]` 
but `dashboard.params_dict` can already contain `default_filters` as a dict 
(not a JSON string); calling `json.loads` on a dict will raise a TypeError at 
runtime. Detect the type and only call `json.loads` when the value is a string, 
falling back to an empty dict on parse errors. [type error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
                   raw_default_filters = metadata.get("default_filters") or {}
                   if isinstance(raw_default_filters, str):
                       try:
                           default_filters = json.loads(raw_default_filters)
                    except (TypeError, ValueError):
                           default_filters = {}
                   elif isinstance(raw_default_filters, dict):
                       default_filters = raw_default_filters
                   else:
                       default_filters = {}
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   The existing code can raise at runtime: json.loads expects a string, so it
   fails with a TypeError if metadata["default_filters"] is already a dict, and
   with a ValueError if the value is malformed JSON. The proposed change guards
   the load, handling non-string values and parse errors, while the surrounding
   code still writes a JSON string back into metadata, preserving behavior. This
   fixes a real runtime TypeError/ValueError visible in the PR diff.
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/commands/chart/delete.py
   **Line:** 143:143
   **Comment:**
        *Type Error: The code calls json.loads on `metadata["default_filters"]` 
but `dashboard.params_dict` can already contain `default_filters` as a dict 
(not a JSON string); calling `json.loads` on a dict will raise a TypeError at 
runtime. Detect the type and only call `json.loads` when the value is a string, 
falling back to an empty dict on parse errors.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
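To see the failure mode the comment describes, here is a minimal, self-contained sketch; the `load_default_filters` helper is hypothetical (not Superset code) and mirrors the guarded loading pattern in the suggestion:

```python
import json

def load_default_filters(value):
    """Hypothetical helper: coerce a metadata value into a dict,
    tolerating JSON strings, dicts, and malformed input."""
    if isinstance(value, str):
        try:
            return json.loads(value)
        except (TypeError, ValueError):
            return {}
    if isinstance(value, dict):
        return value
    return {}

# json.loads on a dict is exactly the TypeError flagged in the comment
try:
    json.loads({"101": {"col": "value"}})
except TypeError:
    print("TypeError, as expected")

print(load_default_filters('{"101": {"col": "value"}}'))  # parsed from JSON string
print(load_default_filters({"101": {"col": "value"}}))    # passed through as dict
print(load_default_filters("not json"))                   # falls back to {}
```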



##########
superset/commands/dashboard/importers/v1/utils.py:
##########
@@ -75,7 +75,9 @@ def update_id_refs(  # pylint: disable=too-many-locals  # noqa: C901
     metadata = fixed.get("metadata", {})
     if "timed_refresh_immune_slices" in metadata:
         metadata["timed_refresh_immune_slices"] = [
-            id_map[old_id] for old_id in metadata["timed_refresh_immune_slices"]
+            id_map[old_id]
+            for old_id in metadata["timed_refresh_immune_slices"]

Review Comment:
   **Suggestion:** Type mismatch when using `old_id` as a dict key: entries in 
`metadata["timed_refresh_immune_slices"]` are often strings (e.g. "123") while 
`id_map` uses integer keys, so checking `old_id in id_map` will fail and 
lookups will raise KeyError or silently drop entries; convert `old_id` to int 
when indexing and membership-checking to ensure correct mapping. [type error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
               id_map[int(old_id)]
               for old_id in metadata["timed_refresh_immune_slices"]
               if int(old_id) in id_map
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   This comprehension currently assumes the elements in 
metadata["timed_refresh_immune_slices"]
   are of the same type as id_map's keys. In the file id_map keys are integers 
(built from
   chartId), but JSON-exported metadata often contains numeric IDs as strings. 
Casting to int
   before membership and lookup fixes the real, visible mismatch and will 
correctly remap
   slice IDs. Note this change assumes values are numeric-like strings; if 
non-numeric values
   can appear you'll need to guard int() calls.
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/commands/dashboard/importers/v1/utils.py
   **Line:** 78:79
   **Comment:**
        *Type Error: Type mismatch when using `old_id` as a dict key: entries 
in `metadata["timed_refresh_immune_slices"]` are often strings (e.g. "123") 
while `id_map` uses integer keys, so checking `old_id in id_map` will fail and 
lookups will raise KeyError or silently drop entries; convert `old_id` to int 
when indexing and membership-checking to ensure correct mapping.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
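A minimal sketch of the key-type mismatch, assuming numeric-like string entries as found in exported metadata:

```python
# id_map is built with integer keys (from chartId), as in update_id_refs
id_map = {101: 1, 102: 2}

# JSON-exported metadata often carries ids as strings, or mixed types
entries = ["101", 102, "103"]

# naive membership with the raw value misses string entries
assert "101" not in id_map

# normalizing to int before lookup remaps existing ids and skips missing ones
remapped = [id_map[int(e)] for e in entries if int(e) in id_map]
print(remapped)  # [1, 2]
```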



##########
tests/unit_tests/dashboards/commands/importers/v1/utils_test.py:
##########
@@ -199,3 +199,131 @@ def test_update_id_refs_cross_filter_handles_string_excluded():
     fixed = update_id_refs(config, chart_ids, dataset_info)
     # Should not raise and should remap key
     assert "1" in fixed["metadata"]["chart_configuration"]
+
+
+def test_update_id_refs_expanded_slices_with_missing_chart():
+    """
+    Test that missing charts in expanded_slices are gracefully skipped.
+
+    When a chart is deleted from a workspace, dashboards may still contain
+    references to the deleted chart ID in expanded_slices metadata. This test
+    verifies that the import process skips missing chart references instead of
+    raising a KeyError.
+    """
+    from superset.commands.dashboard.importers.v1.utils import update_id_refs
+
+    config = {
+        "position": {
+            "CHART1": {
+                "id": "CHART1",
+                "meta": {"chartId": 101, "uuid": "uuid1"},
+                "type": "CHART",
+            },
+        },
+        "metadata": {
+            "expanded_slices": {
+                "101": True,  # This chart exists in the import
+                "102": False,  # This chart was deleted and doesn't exist
+                "103": True,  # Another deleted chart
+            },
+        },
+    }
+    chart_ids = {"uuid1": 1}  # Only uuid1 exists in the import
+    dataset_info: dict[str, dict[str, Any]] = {}
+
+    fixed = update_id_refs(config, chart_ids, dataset_info)
+
+    # Should only include the existing chart, missing charts are skipped
+    assert fixed["metadata"]["expanded_slices"] == {"1": True}
+    # Should not raise KeyError for missing charts 102 and 103
+
+
+def test_update_id_refs_timed_refresh_immune_slices_with_missing_chart():
+    """
+    Test that missing charts in timed_refresh_immune_slices are gracefully skipped.
+
+    When a chart is deleted from a workspace, dashboards may still contain
+    references to the deleted chart ID in timed_refresh_immune_slices metadata.
+    This test verifies that the import process skips missing chart references
+    instead of raising a KeyError.
+    """
+    from superset.commands.dashboard.importers.v1.utils import update_id_refs
+
+    config = {
+        "position": {
+            "CHART1": {
+                "id": "CHART1",
+                "meta": {"chartId": 101, "uuid": "uuid1"},
+                "type": "CHART",
+            },
+            "CHART2": {
+                "id": "CHART2",
+                "meta": {"chartId": 102, "uuid": "uuid2"},
+                "type": "CHART",
+            },
+        },
+        "metadata": {
+            "timed_refresh_immune_slices": [
+                101,  # This chart exists
+                102,  # This chart exists
+                103,  # This chart was deleted and doesn't exist
+                104,  # Another deleted chart
+            ],
+        },
+    }
+    chart_ids = {"uuid1": 1, "uuid2": 2}  # Only uuid1 and uuid2 exist
+    dataset_info: dict[str, dict[str, Any]] = {}
+
+    fixed = update_id_refs(config, chart_ids, dataset_info)
+
+    # Should only include existing charts, missing charts are skipped
+    assert fixed["metadata"]["timed_refresh_immune_slices"] == [1, 2]
+    # Should not raise KeyError for missing charts 103 and 104
+
+
+def test_update_id_refs_multiple_missing_chart_references():
+    """
+    Test that multiple metadata fields with missing charts are all handled gracefully.
+
+    This comprehensive test verifies that all metadata fields properly skip
+    missing chart references during import.
+    """
+    from superset.commands.dashboard.importers.v1.utils import update_id_refs
+
+    config = {
+        "position": {
+            "CHART1": {
+                "id": "CHART1",
+                "meta": {"chartId": 101, "uuid": "uuid1"},
+                "type": "CHART",
+            },
+        },
+        "metadata": {
+            "expanded_slices": {
+                "101": True,
+                "999": False,  # Missing chart
+            },
+            "timed_refresh_immune_slices": [101, 999],  # 999 is missing
+            "filter_scopes": {
+                "101": {"region": {"immune": [999]}},  # 999 is missing
+                "999": {"region": {"immune": [101]}},  # Key 999 is missing
+            },
+            "default_filters": '{"101": {"col": "value"}, "999": {"col": "other"}}',
+            "native_filter_configuration": [
+                {"scope": {"excluded": [101, 999]}}  # 999 is missing
+            ],
+        },
+    }
+    chart_ids = {"uuid1": 1}  # Only uuid1 exists
+    dataset_info: dict[str, dict[str, Any]] = {}
+
+    fixed = update_id_refs(config, chart_ids, dataset_info)
+
+    # All missing chart references should be gracefully skipped
+    assert fixed["metadata"]["expanded_slices"] == {"1": True}
+    assert fixed["metadata"]["timed_refresh_immune_slices"] == [1]
+    assert fixed["metadata"]["filter_scopes"] == {"1": {"region": {"immune": []}}}
+    assert fixed["metadata"]["default_filters"] == '{"1": {"col": "value"}}'

Review Comment:
   **Suggestion:** Comparing JSON as a raw string is brittle: different JSON 
serializers or minor formatting changes (spacing, key order) can break the 
assertion even when the logical structure is identical. Parse the JSON and 
compare the resulting dict instead. [possible bug]
   
   **Severity Level:** Critical 🚨
   ```suggestion
       import json
    assert json.loads(fixed["metadata"]["default_filters"]) == {"1": {"col": "value"}}
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   Comparing raw JSON text is brittle — different serializers, spacing, or key
   ordering can make two semantically-equal payloads differ as strings.
   The test exercises semantics (the id remapping), not the exact whitespace
   of json.dumps output, so parsing and comparing dicts is more robust and
   less likely to produce false negatives.
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** tests/unit_tests/dashboards/commands/importers/v1/utils_test.py
   **Line:** 326:326
   **Comment:**
        *Possible Bug: Comparing JSON as a raw string is brittle: different 
JSON serializers or minor formatting changes (spacing, key order) can break the 
assertion even when the logical structure is identical. Parse the JSON and 
compare the resulting dict instead.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
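A quick illustration of why parsed comparison is more robust than raw string comparison:

```python
import json

a = json.dumps({"1": {"col": "value"}})
b = '{"1": {"col": "value"}}'
c = '{ "1" : { "col" : "value" } }'  # same structure, different spacing

print(a == c)                                           # False: string comparison is formatting-sensitive
print(json.loads(a) == json.loads(b) == json.loads(c))  # True: parsed comparison ignores formatting
```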



##########
superset/commands/chart/delete.py:
##########
@@ -68,3 +73,132 @@ def validate(self) -> None:
                 security_manager.raise_for_ownership(model)
             except SupersetSecurityException as ex:
                 raise ChartForbiddenError() from ex
+
+    def _cleanup_dashboard_metadata(  # noqa: C901
+        self, chart_id: int
+    ) -> None:
+        """
+        Remove references to this chart from all dashboard metadata.
+
+        When a chart is deleted, dashboards may still contain references to the
+        chart ID in various metadata fields (expanded_slices, filter_scopes, etc.).
+        This method cleans up those references to prevent issues during dashboard
+        export/import.
+        """
+        # Find all dashboards that contain this chart
+        dashboards = (
+            db.session.query(Dashboard)
+            .filter(Dashboard.slices.any(id=chart_id))  # type: ignore[attr-defined]
+            .all()
+        )
+
+        for dashboard in dashboards:
+            metadata = dashboard.params_dict
+            modified = False
+
+            # Clean up expanded_slices
+            if "expanded_slices" in metadata:
+                chart_id_str = str(chart_id)
+                if chart_id_str in metadata["expanded_slices"]:
+                    del metadata["expanded_slices"][chart_id_str]
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from expanded_slices in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+
+            # Clean up timed_refresh_immune_slices
+            if "timed_refresh_immune_slices" in metadata:
+                if chart_id in metadata["timed_refresh_immune_slices"]:
+                    metadata["timed_refresh_immune_slices"].remove(chart_id)
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from timed_refresh_immune_slices "
+                        "in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+
+            # Clean up filter_scopes
+            if "filter_scopes" in metadata:
+                chart_id_str = str(chart_id)
+                if chart_id_str in metadata["filter_scopes"]:
+                    del metadata["filter_scopes"][chart_id_str]
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from filter_scopes in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+                # Also clean from immune lists
+                for filter_scope in metadata["filter_scopes"].values():
+                    for attributes in filter_scope.values():
+                        if chart_id in attributes.get("immune", []):
+                            attributes["immune"].remove(chart_id)
+                            modified = True
+
+            # Clean up default_filters
+            if "default_filters" in metadata:
+                default_filters = json.loads(metadata["default_filters"])
+                chart_id_str = str(chart_id)
+                if chart_id_str in default_filters:
+                    del default_filters[chart_id_str]
+                    metadata["default_filters"] = json.dumps(default_filters)
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from default_filters in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+
+            # Clean up native_filter_configuration scope exclusions
+            if "native_filter_configuration" in metadata:
+                for native_filter in metadata["native_filter_configuration"]:
+                    scope_excluded = native_filter.get("scope", {}).get("excluded", [])
+                    if chart_id in scope_excluded:
+                        scope_excluded.remove(chart_id)
+                        modified = True

Review Comment:
   **Suggestion:** The code mutates `scope_excluded` returned from nested dict 
lookups but assumes elements are ints; if IDs are strings this will not remove 
matches, and it doesn't always write the modified list back into the nested 
`scope` dict. Rebuild the `excluded` list by filtering with string comparison 
and assign it back into the native filter's scope. [logic error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
                       scope = native_filter.get("scope") or {}
                       excluded = scope.get("excluded") or []
                        new_excluded = [v for v in excluded if str(v) != str(chart_id)]
                       if len(new_excluded) != len(excluded):
                           scope["excluded"] = new_excluded
                           native_filter["scope"] = scope
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   The existing code mutates the retrieved list in place without guarding
   against string-typed IDs, and when get(...) returns its default the list is
   not attached to native_filter, so the change is silently lost. Rebuilding the
   filtered list and assigning it back into native_filter["scope"]["excluded"]
   is safer and ensures the change persists.
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/commands/chart/delete.py
   **Line:** 159:161
   **Comment:**
        *Logic Error: The code mutates `scope_excluded` returned from nested 
dict lookups but assumes elements are ints; if IDs are strings this will not 
remove matches, and it doesn't always write the modified list back into the 
nested `scope` dict. Rebuild the `excluded` list by filtering with string 
comparison and assign it back into the native filter's scope.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
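The rebuild-and-reassign pattern can be exercised in isolation; `prune_excluded` is a hypothetical helper mirroring the suggestion above, not Superset code:

```python
def prune_excluded(native_filter, chart_id):
    """Hypothetical helper: rebuild the excluded list with string
    comparison and assign it back into the filter's scope."""
    scope = native_filter.get("scope") or {}
    excluded = scope.get("excluded") or []
    new_excluded = [v for v in excluded if str(v) != str(chart_id)]
    changed = len(new_excluded) != len(excluded)
    if changed:
        scope["excluded"] = new_excluded
        native_filter["scope"] = scope
    return changed

# in-place remove with an int silently misses a string-typed id
stale = ["101", 103]
if 101 in stale:          # False: 101 != "101"
    stale.remove(101)
print(stale)              # ['101', 103] -- stale reference survives

# the rebuild handles both representations
nf = {"scope": {"excluded": ["101", 103]}}
print(prune_excluded(nf, 101))  # True
print(nf["scope"]["excluded"])  # [103]
```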



##########
superset/commands/chart/delete.py:
##########
@@ -68,3 +73,132 @@ def validate(self) -> None:
                 security_manager.raise_for_ownership(model)
             except SupersetSecurityException as ex:
                 raise ChartForbiddenError() from ex
+
+    def _cleanup_dashboard_metadata(  # noqa: C901
+        self, chart_id: int
+    ) -> None:
+        """
+        Remove references to this chart from all dashboard metadata.
+
+        When a chart is deleted, dashboards may still contain references to the
+        chart ID in various metadata fields (expanded_slices, filter_scopes, etc.).
+        This method cleans up those references to prevent issues during dashboard
+        export/import.
+        """
+        # Find all dashboards that contain this chart
+        dashboards = (
+            db.session.query(Dashboard)
+            .filter(Dashboard.slices.any(id=chart_id))  # type: ignore[attr-defined]
+            .all()
+        )
+
+        for dashboard in dashboards:
+            metadata = dashboard.params_dict
+            modified = False
+
+            # Clean up expanded_slices
+            if "expanded_slices" in metadata:
+                chart_id_str = str(chart_id)
+                if chart_id_str in metadata["expanded_slices"]:
+                    del metadata["expanded_slices"][chart_id_str]
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from expanded_slices in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+
+            # Clean up timed_refresh_immune_slices
+            if "timed_refresh_immune_slices" in metadata:
+                if chart_id in metadata["timed_refresh_immune_slices"]:
+                    metadata["timed_refresh_immune_slices"].remove(chart_id)
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from timed_refresh_immune_slices "
+                        "in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+
+            # Clean up filter_scopes
+            if "filter_scopes" in metadata:
+                chart_id_str = str(chart_id)
+                if chart_id_str in metadata["filter_scopes"]:
+                    del metadata["filter_scopes"][chart_id_str]
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from filter_scopes in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+                # Also clean from immune lists
+                for filter_scope in metadata["filter_scopes"].values():
+                    for attributes in filter_scope.values():
+                        if chart_id in attributes.get("immune", []):
+                            attributes["immune"].remove(chart_id)

Review Comment:
   **Suggestion:** The code directly calls `attributes.get("immune", [])` and 
`remove(chart_id)` which will fail to remove entries if the immune list stores 
IDs as strings (or may error if `immune` isn't a list). Normalize or validate 
`immune` as a list and rebuild it excluding entries matching the chart id 
(compare as strings) to avoid leaving stale references or raising errors. 
[logic error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
                           immune = attributes.get("immune")
                           if not isinstance(immune, list):
                               continue
                            new_immune = [v for v in immune if str(v) != str(chart_id)]
                           if len(new_immune) != len(immune):
                               attributes["immune"] = new_immune
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   attributes.get("immune", []) may not be a list, and entries may be string
   IDs: the membership check can raise on non-list values, and remove with an
   int silently misses string entries. The suggested normalization and rebuild
   removes matching entries reliably without raising and handles mixed types.
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/commands/chart/delete.py
   **Line:** 137:138
   **Comment:**
        *Logic Error: The code directly calls `attributes.get("immune", [])` 
and `remove(chart_id)` which will fail to remove entries if the immune list 
stores IDs as strings (or may error if `immune` isn't a list). Normalize or 
validate `immune` as a list and rebuild it excluding entries matching the chart 
id (compare as strings) to avoid leaving stale references or raising errors.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
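The isinstance guard can be demonstrated standalone; `prune_immune` is a hypothetical helper sketching the suggested normalization, not Superset code:

```python
def prune_immune(attributes, chart_id):
    """Hypothetical helper: validate that "immune" is a list before
    filtering, comparing entries as strings."""
    immune = attributes.get("immune")
    if not isinstance(immune, list):
        return False
    new_immune = [v for v in immune if str(v) != str(chart_id)]
    if len(new_immune) != len(immune):
        attributes["immune"] = new_immune
        return True
    return False

attrs = {"immune": ["101", 202]}
print(prune_immune(attrs, 101), attrs)       # True {'immune': [202]}

# non-list values are skipped instead of raising
print(prune_immune({"immune": "101"}, 101))  # False
```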



##########
superset/commands/chart/delete.py:
##########
@@ -68,3 +73,132 @@ def validate(self) -> None:
                 security_manager.raise_for_ownership(model)
             except SupersetSecurityException as ex:
                 raise ChartForbiddenError() from ex
+
+    def _cleanup_dashboard_metadata(  # noqa: C901
+        self, chart_id: int
+    ) -> None:
+        """
+        Remove references to this chart from all dashboard metadata.
+
+        When a chart is deleted, dashboards may still contain references to the
+        chart ID in various metadata fields (expanded_slices, filter_scopes, etc.).
+        This method cleans up those references to prevent issues during dashboard
+        export/import.
+        """
+        # Find all dashboards that contain this chart
+        dashboards = (
+            db.session.query(Dashboard)
+            .filter(Dashboard.slices.any(id=chart_id))  # type: ignore[attr-defined]
+            .all()
+        )
+
+        for dashboard in dashboards:
+            metadata = dashboard.params_dict
+            modified = False
+
+            # Clean up expanded_slices
+            if "expanded_slices" in metadata:
+                chart_id_str = str(chart_id)
+                if chart_id_str in metadata["expanded_slices"]:
+                    del metadata["expanded_slices"][chart_id_str]
+                    modified = True
+                    logger.info(
+                        "Removed chart %s from expanded_slices in dashboard %s",
+                        chart_id,
+                        dashboard.id,
+                    )
+
+            # Clean up timed_refresh_immune_slices
+            if "timed_refresh_immune_slices" in metadata:
+                if chart_id in metadata["timed_refresh_immune_slices"]:
+                    metadata["timed_refresh_immune_slices"].remove(chart_id)

Review Comment:
   **Suggestion:** The code removes `chart_id` from 
`timed_refresh_immune_slices` by checking membership with the integer 
`chart_id`, but stored values may be strings (or mixed types); this causes 
stale references to remain. Normalize/comparison should use string equality or 
rebuild the list excluding the chart id in either string or numeric form. 
[logic error]
   
   **Severity Level:** Minor ⚠️
   ```suggestion
                   trs = metadata.get("timed_refresh_immune_slices") or []
                    # Remove any entries that match the chart id either as number or string
                   new_trs = [v for v in trs if str(v) != str(chart_id)]
                   if len(new_trs) != len(trs):
                       metadata["timed_refresh_immune_slices"] = new_trs
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   The current code assumes stored IDs are ints and will miss entries stored as 
strings (or mixed types). Rebuilding the list using string-normalized 
comparison is a safe, correct approach to remove all matching entries and 
avoids subtle stale references.
   </details>
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** superset/commands/chart/delete.py
   **Line:** 113:114
   **Comment:**
        *Logic Error: The code removes `chart_id` from 
`timed_refresh_immune_slices` by checking membership with the integer 
`chart_id`, but stored values may be strings (or mixed types); this causes 
stale references to remain. Normalize/comparison should use string equality or 
rebuild the list excluding the chart id in either string or numeric form.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>



##########
tests/unit_tests/dashboards/commands/importers/v1/utils_test.py:
##########
@@ -199,3 +199,131 @@ def 
test_update_id_refs_cross_filter_handles_string_excluded():
     fixed = update_id_refs(config, chart_ids, dataset_info)
     # Should not raise and should remap key
     assert "1" in fixed["metadata"]["chart_configuration"]
+
+
+def test_update_id_refs_expanded_slices_with_missing_chart():
+    """
+    Test that missing charts in expanded_slices are gracefully skipped.
+
+    When a chart is deleted from a workspace, dashboards may still contain
+    references to the deleted chart ID in expanded_slices metadata. This test
+    verifies that the import process skips missing chart references instead of
+    raising a KeyError.
+    """
+    from superset.commands.dashboard.importers.v1.utils import update_id_refs
+
+    config = {
+        "position": {
+            "CHART1": {
+                "id": "CHART1",
+                "meta": {"chartId": 101, "uuid": "uuid1"},
+                "type": "CHART",
+            },
+        },
+        "metadata": {
+            "expanded_slices": {
+                "101": True,  # This chart exists in the import
+                "102": False,  # This chart was deleted and doesn't exist
+                "103": True,  # Another deleted chart
+            },
+        },
+    }
+    chart_ids = {"uuid1": 1}  # Only uuid1 exists in the import
+    dataset_info: dict[str, dict[str, Any]] = {}
+
+    fixed = update_id_refs(config, chart_ids, dataset_info)
+
+    # Should only include the existing chart, missing charts are skipped
+    assert fixed["metadata"]["expanded_slices"] == {"1": True}
+    # Should not raise KeyError for missing charts 102 and 103
+
+
+def test_update_id_refs_timed_refresh_immune_slices_with_missing_chart():
+    """
+    Test that missing charts in timed_refresh_immune_slices are gracefully 
skipped.
+
+    When a chart is deleted from a workspace, dashboards may still contain
+    references to the deleted chart ID in timed_refresh_immune_slices metadata.
+    This test verifies that the import process skips missing chart references
+    instead of raising a KeyError.
+    """
+    from superset.commands.dashboard.importers.v1.utils import update_id_refs
+
+    config = {
+        "position": {
+            "CHART1": {
+                "id": "CHART1",
+                "meta": {"chartId": 101, "uuid": "uuid1"},
+                "type": "CHART",
+            },
+            "CHART2": {
+                "id": "CHART2",
+                "meta": {"chartId": 102, "uuid": "uuid2"},
+                "type": "CHART",
+            },
+        },
+        "metadata": {
+            "timed_refresh_immune_slices": [
+                101,  # This chart exists
+                102,  # This chart exists
+                103,  # This chart was deleted and doesn't exist
+                104,  # Another deleted chart
+            ],
+        },
+    }
+    chart_ids = {"uuid1": 1, "uuid2": 2}  # Only uuid1 and uuid2 exist
+    dataset_info: dict[str, dict[str, Any]] = {}
+
+    fixed = update_id_refs(config, chart_ids, dataset_info)
+
+    # Should only include existing charts, missing charts are skipped
+    assert fixed["metadata"]["timed_refresh_immune_slices"] == [1, 2]
+    # Should not raise KeyError for missing charts 103 and 104
+
+
+def test_update_id_refs_multiple_missing_chart_references():
+    """
+    Test that multiple metadata fields with missing charts are all handled gracefully.
+
+    This comprehensive test verifies that all metadata fields properly skip
+    missing chart references during import.
+    """
+    from superset.commands.dashboard.importers.v1.utils import update_id_refs
+
+    config = {
+        "position": {
+            "CHART1": {
+                "id": "CHART1",
+                "meta": {"chartId": 101, "uuid": "uuid1"},
+                "type": "CHART",
+            },
+        },
+        "metadata": {
+            "expanded_slices": {
+                "101": True,
+                "999": False,  # Missing chart
+            },
+            "timed_refresh_immune_slices": [101, 999],  # 999 is missing
+            "filter_scopes": {
+                "101": {"region": {"immune": [999]}},  # 999 is missing
+                "999": {"region": {"immune": [101]}},  # Key 999 is missing
+            },
+            "default_filters": '{"101": {"col": "value"}, "999": {"col": 
"other"}}',
+            "native_filter_configuration": [
+                {"scope": {"excluded": [101, 999]}}  # 999 is missing
+            ],
+        },
+    }
+    chart_ids = {"uuid1": 1}  # Only uuid1 exists
+    dataset_info: dict[str, dict[str, Any]] = {}
+
+    fixed = update_id_refs(config, chart_ids, dataset_info)
+
+    # All missing chart references should be gracefully skipped
+    assert fixed["metadata"]["expanded_slices"] == {"1": True}
+    assert fixed["metadata"]["timed_refresh_immune_slices"] == [1]
+    assert fixed["metadata"]["filter_scopes"] == {"1": {"region": {"immune": 
[]}}}
+    assert fixed["metadata"]["default_filters"] == '{"1": {"col": "value"}}'
+    assert fixed["metadata"]["native_filter_configuration"] == [
+        {"scope": {"excluded": [1]}}
+    ]

Review Comment:
   **Suggestion:** The test asserts exact structural equality for `native_filter_configuration`, which is brittle: it can break if the list ordering changes or non-essential keys are added. Assert the meaningful part directly (that the scope's excluded list was remapped) to make the test robust. [possible bug]
   
   **Severity Level:** Critical 🚨
   ```suggestion
       nf_config = fixed["metadata"].get("native_filter_configuration", [])
       assert isinstance(nf_config, list)
       # verify that the first native filter has the expected remapped excluded list
       assert nf_config and nf_config[0].get("scope", {}).get("excluded") == [1]
   ```
   <details>
   <summary><b>Why it matters? ⭐ </b></summary>
   
   The test is concerned with verifying that the excluded IDs were remapped,
   not with asserting the entire structure and ordering of the native filter
   configuration. Relaxing the assertion to check the meaningful part (the
   remapped excluded list) makes the test less brittle while preserving the
   intent of the assertion.
   </details>
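
   For illustration, the relaxed assertion can also be factored into a small helper so multiple tests share the tolerant lookup. This is a hypothetical sketch — `get_excluded` is not part of the Superset test suite:

   ```python
   def get_excluded(metadata: dict, index: int = 0) -> list:
       """Pull the remapped `excluded` list out of a native filter entry,
       returning [] instead of raising when keys or entries are missing."""
       nf_config = metadata.get("native_filter_configuration", [])
       if not isinstance(nf_config, list) or len(nf_config) <= index:
           return []
       return nf_config[index].get("scope", {}).get("excluded", [])
   ```

   With this helper, `assert get_excluded(fixed["metadata"]) == [1]` keeps passing even if the filter entries later grow extra keys.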
   <details>
   <summary><b>Prompt for AI Agent 🤖 </b></summary>
   
   ```mdx
   This is a comment left during a code review.
   
   **Path:** tests/unit_tests/dashboards/commands/importers/v1/utils_test.py
   **Line:** 327:329
   **Comment:**
        *Possible Bug: The test asserts exact structural equality for 
`native_filter_configuration` which is brittle and can break if the list 
ordering or extra non-essential keys change; assert the meaningful parts 
directly (that the scope excluded list was remapped) to make the test robust.
   
   Validate the correctness of the flagged issue. If correct, How can I resolve 
this? If you propose a fix, implement it and please make it concise.
   ```
   </details>
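
   The behavior these tests exercise boils down to one pattern: remap old chart IDs to new ones while silently skipping entries absent from the import. A minimal sketch of that pattern (the helper names and the `id_map` argument are assumptions for illustration, not the actual `update_id_refs` internals):

   ```python
   import json

   def remap_dict_keys(mapping: dict, id_map: dict) -> dict:
       # e.g. expanded_slices: keys are stringified old chart IDs
       return {
           str(id_map[int(k)]): v
           for k, v in mapping.items()
           if int(k) in id_map  # skip charts deleted from the workspace
       }

   def remap_id_list(ids: list, id_map: dict) -> list:
       # e.g. timed_refresh_immune_slices: plain list of old chart IDs
       return [id_map[i] for i in ids if i in id_map]

   def remap_default_filters(raw: str, id_map: dict) -> str:
       # default_filters is stored as a JSON-encoded string
       return json.dumps(remap_dict_keys(json.loads(raw), id_map))
   ```

   Each helper filters before it remaps, which is exactly why the tests above expect no `KeyError` for IDs like 999 that have no entry in `chart_ids`.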



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

