dawidwys commented on a change in pull request #12335:
URL: https://github.com/apache/flink/pull/12335#discussion_r435951724



##########
File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
##########
@@ -490,8 +490,11 @@ private void insertIntoInternal(UnresolvedIdentifier unresolvedIdentifier, Table
        private Optional<CatalogQueryOperation> scanInternal(UnresolvedIdentifier identifier) {
                ObjectIdentifier tableIdentifier = catalogManager.qualifyIdentifier(identifier);
 
-               return catalogManager.getTable(tableIdentifier)
-                       .map(t -> new CatalogQueryOperation(tableIdentifier, t.getTable().getSchema()));
+               return catalogManager.getTable(tableIdentifier).map(t -> {
+                       CatalogTableSchemaResolver resolver = new CatalogTableSchemaResolver(parser);

Review comment:
       I had a look at #12260, and my feeling is even stronger that we are fixing this everywhere except where it actually should be fixed. IMO, if we fixed it when the Table is created and had the proper types from the very beginning, we would not need to patch it everywhere.
   
   I have the feeling that the problem here is that the type we store in the catalog is wrong. I looked into it, and my take is that the TimestampType#kind should also be stored in the catalog; this would fix most of the problems. Right now we are basically stripping that information when storing the type in the catalog. I am not 100% sure whether we should store the time attribute property, but I think it would remove the need for the hacks in this PR and in #12260. I'd like to know what @twalthr thinks about it.
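   To make the information-loss argument concrete, here is a minimal, self-contained sketch (not Flink's actual classes; `TimestampColumn`, `toCatalogString`, and `fromCatalogString` are hypothetical stand-ins) of what "stripping the kind when storing in the catalog" means: a timestamp column carries a kind (regular, rowtime, proctime), but if the catalog persists only a type string like `TIMESTAMP(3)`, the kind cannot survive the round trip.

```java
public class KindStrippingSketch {

    // Hypothetical stand-in for a timestamp kind enum such as
    // org.apache.flink.table.types.logical.TimestampKind.
    enum TimestampKind { REGULAR, ROWTIME, PROCTIME }

    // Hypothetical stand-in for a timestamp logical type with a kind.
    record TimestampColumn(int precision, TimestampKind kind) {

        // Mimics persisting the type to catalog properties as a plain type
        // string; note the kind is not encoded.
        String toCatalogString() {
            return "TIMESTAMP(" + precision + ")";
        }

        // Reading the stored string back can only yield REGULAR:
        // the rowtime/proctime information is gone.
        static TimestampColumn fromCatalogString(String s) {
            int precision = Integer.parseInt(
                    s.substring(s.indexOf('(') + 1, s.indexOf(')')));
            return new TimestampColumn(precision, TimestampKind.REGULAR);
        }
    }

    public static void main(String[] args) {
        TimestampColumn rowtime = new TimestampColumn(3, TimestampKind.ROWTIME);
        TimestampColumn restored =
                TimestampColumn.fromCatalogString(rowtime.toCatalogString());
        System.out.println("before: " + rowtime.kind() + ", after: " + restored.kind());
    }
}
```

   If the kind were persisted alongside the type (as the comment proposes), downstream readers such as `scanInternal` would get the proper rowtime/proctime types directly, instead of having to re-resolve them with a schema resolver at every access point.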




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]