dosubot[bot] commented on issue #19222:
URL: https://github.com/apache/superset/issues/19222#issuecomment-2137760898

   <!-- Greeting -->
   @rusackas Hello! I'm here to help you with any bugs, questions, or 
contributions while you wait for a human maintainer. Feel free to ask me 
anything. [Dosu](https://dosu.dev)
   
   <!-- Answer -->
   The error you're encountering during the dashboard import process in Apache 
Superset is likely due to issues with the `filter_scopes` in the dashboard's 
metadata. Specifically, `KeyError: 580` means the import process is trying to 
look up the key `580` (most likely a chart ID referenced by `filter_scopes`) in 
a dictionary that doesn't contain it; in other words, the scopes point at a 
chart that is missing from the import bundle.
   
   To resolve this issue, you can try the following steps:
   
   1. **Check the `filter_scopes` in the Exported File**:
      - Ensure that the `filter_scopes` in the exported dashboard file are 
correctly defined and do not reference any non-existent keys.
   
   2. **Use the `ImportDashboardsCommand` Class**:
      - The `ImportDashboardsCommand` class in Superset handles the import 
process and ensures that the dashboard metadata and position references are 
correctly mapped to the new IDs. This class also handles the conversion of 
filter scopes and other necessary adjustments.
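   
   As a quick sanity check for step 1, you can scan the exported dashboard for 
`filter_scopes` keys that no longer match any chart in the layout. The sketch 
below is illustrative only and assumes a v0-style export layout, where 
`position_json` components carry `meta.chartId` and 
`json_metadata.filter_scopes` is keyed by chart ID:
   
   ```python
import json

def find_stale_filter_scopes(dashboard: dict) -> set[str]:
    """Return filter_scopes keys that reference chart IDs absent from the
    dashboard's position data.

    Illustrative only: assumes v0-style exports, where position_json
    components carry meta.chartId and json_metadata.filter_scopes is
    keyed by chart ID.
    """
    position = json.loads(dashboard.get("position_json") or "{}")
    chart_ids = {
        str(component["meta"]["chartId"])
        for component in position.values()
        if isinstance(component, dict) and component.get("type") == "CHART"
    }
    metadata = json.loads(dashboard.get("json_metadata") or "{}")
    filter_scopes = metadata.get("filter_scopes", {})
    return {key for key in filter_scopes if key not in chart_ids}

# Example: key "580" has no matching chart in the position data.
dashboard = {
    "position_json": json.dumps({
        "CHART-abc": {"type": "CHART", "meta": {"chartId": 123}},
    }),
    "json_metadata": json.dumps({
        "filter_scopes": {"123": {}, "580": {}},
    }),
}
print(find_stale_filter_scopes(dashboard))  # {'580'}
   ```
   
   Any key the function reports (e.g. `580`) can be removed from 
`filter_scopes` in the export file before retrying the import.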
   
   Here is a relevant code snippet from the `ImportDashboardsCommand` class 
that shows how it handles the import process:
   
   ```python
   class ImportDashboardsCommand(ImportModelsCommand):
       """Import dashboards"""
   
       dao = DashboardDAO
       model_name = "dashboard"
       prefix = "dashboards/"
       schemas: dict[str, Schema] = {
           "charts/": ImportV1ChartSchema(),
           "dashboards/": ImportV1DashboardSchema(),
           "datasets/": ImportV1DatasetSchema(),
           "databases/": ImportV1DatabaseSchema(),
       }
       import_error = DashboardImportError
   
       @staticmethod
       def _import(configs: dict[str, Any], overwrite: bool = False) -> None:
           # discover charts and datasets associated with dashboards
           chart_uuids: set[str] = set()
           dataset_uuids: set[str] = set()
           for file_name, config in configs.items():
               if file_name.startswith("dashboards/"):
                   chart_uuids.update(find_chart_uuids(config["position"]))
                   dataset_uuids.update(
                       find_native_filter_datasets(config.get("metadata", {}))
                   )
   
           # discover datasets associated with charts
           for file_name, config in configs.items():
                if file_name.startswith("charts/") and config["uuid"] in chart_uuids:
                   dataset_uuids.add(config["dataset_uuid"])
   
           # discover databases associated with datasets
           database_uuids: set[str] = set()
           for file_name, config in configs.items():
                if file_name.startswith("datasets/") and config["uuid"] in dataset_uuids:
                   database_uuids.add(config["database_uuid"])
   
           # import related databases
           database_ids: dict[str, int] = {}
           for file_name, config in configs.items():
                if file_name.startswith("databases/") and config["uuid"] in database_uuids:
                   database = import_database(config, overwrite=False)
                   database_ids[str(database.uuid)] = database.id
   
           # import datasets with the correct parent ref
           dataset_info: dict[str, dict[str, Any]] = {}
           for file_name, config in configs.items():
               if (
                   file_name.startswith("datasets/")
                   and config["database_uuid"] in database_ids
               ):
                   config["database_id"] = database_ids[config["database_uuid"]]
                   dataset = import_dataset(config, overwrite=False)
                   dataset_info[str(dataset.uuid)] = {
                       "datasource_id": dataset.id,
                       "datasource_type": dataset.datasource_type,
                       "datasource_name": dataset.table_name,
                   }
   
           # import charts with the correct parent ref
           charts = []
           chart_ids: dict[str, int] = {}
           for file_name, config in configs.items():
               if (
                   file_name.startswith("charts/")
                   and config["dataset_uuid"] in dataset_info
               ):
                   # update datasource id, type, and name
                   dataset_dict = dataset_info[config["dataset_uuid"]]
                   config.update(dataset_dict)
                    dataset_uid = f"{dataset_dict['datasource_id']}__{dataset_dict['datasource_type']}"
                   config["params"].update({"datasource": dataset_uid})
                   if "query_context" in config:
                       config["query_context"] = None
   
                   chart = import_chart(config, overwrite=False)
                   charts.append(chart)
                   chart_ids[str(chart.uuid)] = chart.id
   
           # store the existing relationship between dashboards and charts
           existing_relationships = db.session.execute(
                select([dashboard_slices.c.dashboard_id, dashboard_slices.c.slice_id])
           ).fetchall()
   
           # import dashboards
           dashboards: list[Dashboard] = []
           dashboard_chart_ids: list[tuple[int, int]] = []
           for file_name, config in configs.items():
               if file_name.startswith("dashboards/"):
                   config = update_id_refs(config, chart_ids, dataset_info)
                   dashboard = import_dashboard(config, overwrite=overwrite)
                   dashboards.append(dashboard)
                   for uuid in find_chart_uuids(config["position"]):
                       if uuid not in chart_ids:
                           break
                       chart_id = chart_ids[uuid]
                        if (dashboard.id, chart_id) not in existing_relationships:
                            dashboard_chart_ids.append((dashboard.id, chart_id))
   
           # set ref in the dashboard_slices table
           values = [
               {"dashboard_id": dashboard_id, "slice_id": chart_id}
               for (dashboard_id, chart_id) in dashboard_chart_ids
           ]
           db.session.execute(dashboard_slices.insert(), values)
   
           # Migrate any filter-box charts to native dashboard filters.
           for dashboard in dashboards:
               migrate_dashboard(dashboard)
   
           # Remove all obsolete filter-box charts.
           for chart in charts:
               if chart.viz_type == "filter_box":
                   db.session.delete(chart)
   ```
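   
   For context, the `find_chart_uuids` helper used above walks the dashboard's 
`position` data to collect the charts the dashboard references. A simplified 
stand-in (not Superset's actual implementation, which lives in 
`superset/commands/dashboard/importers/v1/utils.py`) might look like:
   
   ```python
from typing import Any

def find_chart_uuids_sketch(position: dict[str, Any]) -> set[str]:
    """Simplified stand-in for Superset's find_chart_uuids helper: collect
    the chart UUIDs referenced by a dashboard's position data. In v1
    exports, chart components carry their UUID under meta.uuid.
    """
    return {
        component["meta"]["uuid"]
        for component in position.values()
        if isinstance(component, dict)
        and component.get("type") == "CHART"
        and "uuid" in component.get("meta", {})
    }

position = {
    "DASHBOARD_VERSION_KEY": "v2",
    "CHART-1": {"type": "CHART", "meta": {"uuid": "aaa-111"}},
    "CHART-2": {"type": "CHART", "meta": {"uuid": "bbb-222"}},
    "ROW-1": {"type": "ROW", "meta": {}},
}
print(sorted(find_chart_uuids_sketch(position)))  # ['aaa-111', 'bbb-222']
   ```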
   
   This class remaps the dashboard metadata to the newly assigned IDs, 
ensuring that all references resolve and preventing errors caused by missing or 
stale parameters. It also migrates filter-box charts to native dashboard 
filters, which is necessary because the Filter Box code was removed in version 
4.0.0 
<sup>[[1]](https://github.com/apache/superset/blob/master/superset/commands/dashboard/importers/v1/utils.py)</sup><sup>[[2]](https://github.com/apache/superset/blob/master/superset/commands/dashboard/importers/v0.py)</sup><sup>[[3]](https://github.com/apache/superset/blob/master/superset/commands/dashboard/importers/v1/__init__.py)</sup><sup>[[4]](https://github.com/apache/superset/blob/master/CHANGELOG/4.0.0.md)</sup>.
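   
   If the import still fails on the stale key, a pragmatic workaround is to 
remap `filter_scopes` defensively before importing, dropping any entry whose 
old chart ID has no counterpart in the new-ID map. `remap_filter_scopes` below 
is a hypothetical helper, not part of Superset's API:
   
   ```python
def remap_filter_scopes(filter_scopes: dict, id_map: dict) -> dict:
    """Rebuild filter_scopes using new chart IDs, silently dropping entries
    (such as the offending key 580) whose old chart ID is not present in
    the old-ID -> new-ID map. Hypothetical helper, not Superset's API.
    """
    remapped = {}
    for old_id, scopes in filter_scopes.items():
        new_id = id_map.get(int(old_id))
        if new_id is None:
            continue  # stale reference: skip instead of raising KeyError
        remapped[str(new_id)] = scopes
    return remapped

id_map = {123: 901}  # old chart ID -> new chart ID after import
filter_scopes = {"123": {"scope": ["ROOT_ID"]}, "580": {"scope": ["ROOT_ID"]}}
print(remap_filter_scopes(filter_scopes, id_map))
# {'901': {'scope': ['ROOT_ID']}}
   ```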
   
   
   <!-- Dosu Comment Footer -->
   <details open>
   <summary>
   
   #### Details
   </summary>
   
   **About Dosu**
   This response is meant to be useful and save you time. It is **not meant to 
be a precise solution**, but rather a starting point for your own research.
   
   Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is 
irrelevant.
   </details>
   
   *To continue the conversation, mention @dosu.*
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

