yuqi1129 commented on code in PR #9430:
URL: https://github.com/apache/gravitino/pull/9430#discussion_r2660672284


##########
core/src/main/java/org/apache/gravitino/storage/relational/service/FilesetMetaService.java:
##########
@@ -125,15 +119,37 @@ public FilesetEntity getFilesetByIdentifier(NameIdentifier identifier) {
   public List<FilesetEntity> listFilesetsByNamespace(Namespace namespace) {
     NamespaceUtil.checkFileset(namespace);
 
+    List<FilesetPO> filesetPOs = listFilesetPOs(namespace);
+    return POConverters.fromFilesetPOs(filesetPOs, namespace);
+  }
+
+  private List<FilesetPO> listFilesetPOs(Namespace namespace) {
+    return filesetListFetcher().apply(namespace);
+  }
+
+  private List<FilesetPO> listFilesetPOsBySchemaId(Namespace namespace) {

Review Comment:
   > Another thing is that back to origin, the use of schemaId in cache is to 
accelerate and avoid multiple db access, if we introduce this change, do you 
think we still need to keep the id cache mechanism in the storage module?
   
   Why do we need to keep two paths?
   
   If the cache is enabled, we can reuse the `schemaId` when loading tables 
under a schema with a simple query instead of a complicated join. A query that 
joins 4 or 5 tables takes much more time than a simple one, even though both 
are single queries. If that difference were negligible, we could freely remove 
the old code path, but I believe it should not be neglected.
   
   > If it needs a different code implementation to improve the performance (no 
other solution). It is better to use some design patterns to better separate 
different implementations. I'm not a fan of adding if...else everywhere; this 
is really not a good design practice.
   
   This makes sense. If we agree that both paths need to stay, the code can be 
refactored later.
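   To make the "design pattern instead of if...else" idea concrete, here is a 
minimal, hypothetical Java sketch (the PO type, fetcher names, and the 
`cacheEnabled` flag are illustrative placeholders, not the actual Gravitino 
classes): the fetch strategy is chosen once at construction time, so callers 
like `listFilesetPOs` never branch.

```java
import java.util.List;
import java.util.function.Function;

public class FetcherSketch {
  // Stand-in for the persistence object returned by the storage layer.
  record FilesetPO(String name) {}

  // Each strategy maps a namespace (simplified here to a String)
  // to the fileset rows it fetches.
  interface FilesetFetcher extends Function<String, List<FilesetPO>> {}

  // Single join query; used when no id cache is available.
  static final FilesetFetcher JOIN_FETCHER =
      ns -> List.of(new FilesetPO(ns + "/joined"));

  // Simple query keyed by a cached schemaId; used when the cache is enabled.
  static final FilesetFetcher CACHED_ID_FETCHER =
      ns -> List.of(new FilesetPO(ns + "/by-schema-id"));

  // The strategy is selected once, so no if/else appears at call sites.
  static FilesetFetcher filesetListFetcher(boolean cacheEnabled) {
    return cacheEnabled ? CACHED_ID_FETCHER : JOIN_FETCHER;
  }

  public static void main(String[] args) {
    List<FilesetPO> rows = filesetListFetcher(true).apply("catalog.schema");
    System.out.println(rows.get(0).name());
  }
}
```

   This keeps both query paths but isolates the choice behind one factory 
method, which is the kind of separation the earlier comment asks for.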
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
