betodealmeida commented on a change in pull request #16859:
URL: https://github.com/apache/superset/pull/16859#discussion_r717008867



##########
File path: superset/connectors/sqla/models.py
##########
@@ -1673,7 +1673,7 @@ def before_update(
 
         if not DatasetDAO.validate_uniqueness(
             target.database_id, target.schema, target.table_name
-        ):
+        ) and hasattr(target, "columns"):

Review comment:
       I think we need to understand the problem better.
   
   Why is the backend checking for uniqueness when **updating** columns in a 
dataset? If `database_id`, `schema` and `table_name` didn't change then we 
should have exited in line 1672. Are any of those attributes changing when 
overwriting a dataset from SQL Lab?

##########
File path: superset/connectors/sqla/models.py
##########
@@ -1673,7 +1673,7 @@ def before_update(
 
         if not DatasetDAO.validate_uniqueness(
             target.database_id, target.schema, target.table_name
-        ):
+        ) and hasattr(target, "columns"):

Review comment:
       The uniqueness test checks that we're not changing the database ID, 
schema or table name of an existing dataset to something that already exists.
   
   If you're doing an update to the dataset, and none of those three fields has 
changed, you should reach the `else` branch of the `for` loop, which exits this 
function:
   
   ```python
   for attr in ["database_id", "schema", "table_name"]:
       history = state.get_history(attr, True)
       if history.has_changes():
           break
   else:
       return None  # <= here
   ```
   
   If they are not changing, then `history.has_changes()` should always be 
false for each of those attributes. Can you add some debugging to see why 
`history.has_changes()` is true?
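
   The `for`/`else` early-exit described above can be sketched in plain Python. 
This is a toy stand-in, not the Superset code: the `changed_attrs` set below is a 
hypothetical substitute for SQLAlchemy's `state.get_history(attr, True).has_changes()`:

   ```python
   def should_validate(changed_attrs):
       """Mimic the early-exit guard in before_update.

       `changed_attrs` stands in for SQLAlchemy's attribute history;
       the real code calls state.get_history(attr, True).has_changes().
       """
       for attr in ["database_id", "schema", "table_name"]:
           if attr in changed_attrs:
               break  # a relevant attribute changed: run the uniqueness check
       else:
           # the loop finished without break: nothing relevant changed,
           # so before_update would return None here and skip validation
           return False
       return True

   # Only changes to the three key attributes trigger validation:
   print(should_validate({"description"}))  # False: early exit taken
   print(should_validate({"table_name"}))   # True: uniqueness check runs
   ```

   The point of the `else` clause on a `for` loop is that it runs only when the 
loop completes without hitting `break`, which is exactly the "nothing relevant 
changed" case.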

##########
File path: superset/connectors/sqla/models.py
##########
@@ -1673,7 +1673,7 @@ def before_update(
 
         if not DatasetDAO.validate_uniqueness(
             target.database_id, target.schema, target.table_name
-        ):
+        ) and hasattr(target, "columns"):

Review comment:
       Why are there no columns in the payload when you try to overwrite the 
dataset?

##########
File path: superset/connectors/sqla/models.py
##########
@@ -1673,7 +1673,7 @@ def before_update(
 
         if not DatasetDAO.validate_uniqueness(
             target.database_id, target.schema, target.table_name
-        ):
+        ) and hasattr(target, "columns"):

Review comment:
       Do we create the dataset without a DB and add it later?

##########
File path: superset/connectors/sqla/models.py
##########
@@ -1662,9 +1663,9 @@ def before_update(
 
         # Check whether the relevant attributes have changed.
         state = db.inspect(target)  # pylint: disable=no-member
-
-        for attr in ["database_id", "schema", "table_name"]:
+        for attr in ["schema", "table_name"]:

Review comment:
       We shouldn't remove `database_id` here; otherwise someone might create a 
dataset with the same database ID, schema and table name as an existing one. We 
should figure out why the database ID is coming through as null when updating.
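
       A toy sketch of why the database ID belongs in the uniqueness key. The 
`existing` set and this `validate_uniqueness` stand-in are hypothetical, not the 
real `DatasetDAO.validate_uniqueness`, which queries the metadata database:

       ```python
       # Hypothetical in-memory registry: a dataset is unique by the full
       # (database_id, schema, table_name) key.
       existing = {
           (1, "public", "sales"),
           (2, "public", "sales"),  # same schema/table, different DB: allowed
       }

       def validate_uniqueness(database_id, schema, table_name):
           """Return True when no dataset already uses this exact key."""
           return (database_id, schema, table_name) not in existing

       # With database_id in the key, the same table in another database is fine:
       print(validate_uniqueness(3, "public", "sales"))  # True
       # Dropping database_id from the key would make every "public.sales"
       # collide, or (worse) let a real duplicate within one database slip by.
       print(validate_uniqueness(1, "public", "sales"))  # False: key exists
       ```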




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
