pvary commented on pull request #3269:
URL: https://github.com/apache/iceberg/pull/3269#issuecomment-940682984


   > I agree with @rdblue that we should allow time travel query using table 
name. It's not likely for `AS OF` to be supported by all query engines and 
platforms in the near future.
   
   FWIW, Hive and Impala added this feature for Iceberg tables using the 
standard format. I understand the special way Spark handles features. Which 
other engines have problems implementing the standard and need to use 
the dot-based naming format? 
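   
   For reference, a minimal sketch of the standard-style syntax (using `db.tbl` 
as a placeholder table; the exact keywords and literal formats vary slightly by 
engine, so take this as an approximation):
   
   ```sql
   -- Time travel to a point in time, SQL:2011 style
   SELECT * FROM db.tbl FOR SYSTEM_TIME AS OF '2021-10-01 10:00:00';
   
   -- Time travel to a specific snapshot id
   SELECT * FROM db.tbl FOR SYSTEM_VERSION AS OF 1234567890123456789;
   ```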
   
   > But there is always a use case for using table name to perform time 
travel, because there are applications that user only have control of some 
parts of the SQL such as table name but do not have control over the entire 
SQL.
   
   For Hive it was much harder to add metadata table support, where the 
metadata table name is appended after the real table name and delimited by a dot.
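   
   To illustrate the dot-delimited form (`history` and `snapshots` are how the 
metadata tables look in the engines I know; details may differ elsewhere):
   
   ```sql
   -- The metadata table is addressed as an extra, dot-separated name component
   SELECT * FROM db.tbl.history;
   SELECT * FROM db.tbl.snapshots;
   ```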
   
   I was always envious of the Spark SQL extension feature. Could we add our 
own extension to implement time travel with the syntax defined by the standard?
   
   I fear that if we start using `.` to separate the metadata table name, the 
snapshot, the branch, and the tags, it will be hard to follow which one the user 
wants to refer to. We would have to limit the usable branch and tag names, 
force the user to always use the full path to the tables, and restrict the 
naming hierarchy to the same depth. 
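   
   For example, with `.` overloaded for all of these, the same shape of 
identifier can mean very different things (`files` and `audit` below are just 
example names, not anything that exists):
   
   ```sql
   -- Is `files` the metadata table, a branch, a tag, or a nested namespace?
   SELECT * FROM prod.db.tbl.files;
   
   -- Is `audit` a branch or a tag? Neither the parser nor the reader can tell
   -- without knowing every existing branch and tag name.
   SELECT * FROM prod.db.tbl.audit;
   ```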
   
   If we must use identifier names to express these features, I would prefer to 
use something specific for each feature, like `@` for time travel and `#` for 
tags, so it is easier to decipher the meaning both for the user and for the 
parser. 
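   
   Purely as a hypothetical sketch of what I mean (none of this syntax exists 
today; the separators are only a suggestion):
   
   ```sql
   -- '@' marks time travel to a snapshot id
   SELECT * FROM db.tbl@1234567890123456789;
   
   -- '#' marks a tag reference
   SELECT * FROM db.tbl#eoy_2021;
   ```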
   

