yuqi1129 commented on code in PR #9430:
URL: https://github.com/apache/gravitino/pull/9430#discussion_r2659485707


##########
core/src/main/java/org/apache/gravitino/storage/relational/service/FilesetMetaService.java:
##########
@@ -125,15 +119,37 @@ public FilesetEntity getFilesetByIdentifier(NameIdentifier identifier) {
   public List<FilesetEntity> listFilesetsByNamespace(Namespace namespace) {
     NamespaceUtil.checkFileset(namespace);
 
+    List<FilesetPO> filesetPOs = listFilesetPOs(namespace);
+    return POConverters.fromFilesetPOs(filesetPOs, namespace);
+  }
+
+  private List<FilesetPO> listFilesetPOs(Namespace namespace) {
+    return filesetListFetcher().apply(namespace);
+  }
+
+  private List<FilesetPO> listFilesetPOsBySchemaId(Namespace namespace) {

Review Comment:
   > if we introduce this change, do you think we still need to keep the id cache mechanism in the storage module?
   
   I don't think we need to keep the schema ID in the cache if we use a `JOIN` statement to fetch entities such as tables, since everything would then be fetched from the database directly.
   
   
   > The fix here introduced a new method to fetch the entities, which doubles the complexity. We should think carefully about how to improve performance while keeping the code simple. Otherwise, we will add more and more branches for different conditions and make the code very hard to maintain.
   
   Yeah, I agree. The current changes only help when the cache is disabled, and they do add some complexity. The problem is that once the cache is enabled, these optimizations are largely wasted. So the question becomes: can we reuse the same logic so it works well in both cases without adding extra branches?
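
   For example, the branching could be confined to the fetcher itself so that `listFilesetsByNamespace` stays branch-free. A rough sketch of one possible shape, where the config check `idCacheEnabled()` and the JOIN-backed `listFilesetPOsByJoin(...)` are made-up names (only `listFilesetPOsBySchemaId(...)` exists in this diff):

   ```java
   // Pick the fetch strategy once behind java.util.function.Function, so
   // callers never need to know whether the id cache is enabled.
   private Function<Namespace, List<FilesetPO>> filesetListFetcher() {
     if (idCacheEnabled()) {
       // Cache-on path: resolve the schema id from the cache, then query by id.
       return this::listFilesetPOsBySchemaId;
     }
     // Cache-off path: a single JOIN statement, no prior id lookup.
     return this::listFilesetPOsByJoin;
   }
   ```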
   
   
   
   


