Github user sureshthalamati commented on the issue:

    https://github.com/apache/spark/pull/18994
  
    Thank you very much for reviewing @gatorsmile 
    Scanning through the currently supported DDL syntax for non-partition columns, I think the following DDL statements will impact informational constraints:
    
    **ALTER STATEMENTS**
    ```sql
    ALTER TABLE name RENAME TO new_name
    ALTER TABLE name CHANGE column_name new_name new_type
    ```
    Spark SQL can raise an error if informational constraints are defined on the affected columns, and let the user drop the constraints before proceeding with the DDL. In the future we can enhance the affected DDLs to automatically fix up the constraint definition where possible, instead of raising an error.
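    For illustration, a sketch of the proposed behavior (table, column names, and the error text are all hypothetical):
    
    ```sql
    -- Hypothetical table with an informational primary key.
    CREATE TABLE customers (id INT, name STRING, PRIMARY KEY (id));
    
    -- Renaming the constrained column would be rejected until the
    -- user drops the constraint first:
    ALTER TABLE customers CHANGE id customer_id INT;
    -- error: column `id` is referenced by an informational constraint
    ```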
    
    When Spark adds support for DROP/REPLACE of columns, those statements will also impact informational constraints.
    ```sql
    ALTER TABLE name DROP [COLUMN] column_name
    ALTER TABLE name REPLACE COLUMNS (col_spec[, col_spec ...])
    ```
    **DROP TABLE**
    ```sql
    DROP TABLE name
    ``` 
    Hive drops the referential constraints automatically. Oracle requires the user to specify the _CASCADE CONSTRAINTS_ clause to drop the referential constraints automatically; otherwise it raises an error. Should we stick to the Hive behavior?
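    The two behaviors side by side (the Oracle clause is real Oracle syntax; the table name is illustrative):
    
    ```sql
    -- Hive-style: dropping the table implicitly drops its constraints.
    DROP TABLE customers;
    
    -- Oracle-style: the referential constraints must be dropped explicitly,
    -- unless CASCADE CONSTRAINTS is specified on the drop:
    DROP TABLE customers CASCADE CONSTRAINTS;
    ```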
    
    Fixing the affected DDLs requires carrying additional dependency information as part of the stored primary key definition. Is it OK if I fix the affected DDLs in a separate PR?

