singhpk234 commented on code in PR #15368:
URL: https://github.com/apache/iceberg/pull/15368#discussion_r2829248949
##########
core/src/main/java/org/apache/iceberg/rest/CatalogHandlers.java:
##########
@@ -116,6 +119,9 @@ public class CatalogHandlers {
private static final InMemoryPlanningState IN_MEMORY_PLANNING_STATE =
InMemoryPlanningState.getInstance();
private static final ExecutorService ASYNC_PLANNING_POOL =
Executors.newSingleThreadExecutor();
+ private static final List<Credential> DUMMY_STORAGE_CREDENTIALS =
+     ImmutableList.of(
+         ImmutableCredential.builder().prefix("dummy").putConfig("dummyKey", "dummyVal").build());
Review Comment:
This is not a test class, so is it a good idea to inject dummy key-value pairs here?
I don't have a cleaner E2E setup scenario in mind, though.
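One way to keep the dummy values out of the production path would be to gate them behind an explicit opt-in flag. The sketch below is purely illustrative (the `StorageCredentialProvider` class and the `rest.testing.dummy-credentials` property name are hypothetical, not part of this PR), and it models credentials as plain maps rather than the real `Credential` type:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical helper: only hands out dummy storage credentials when a
// testing flag is explicitly set in the catalog properties.
class StorageCredentialProvider {
  // Illustrative property name, not an actual Iceberg config key.
  static final String TESTING_FLAG = "rest.testing.dummy-credentials";

  static List<Map<String, String>> storageCredentials(Map<String, String> catalogProps) {
    if (Boolean.parseBoolean(catalogProps.getOrDefault(TESTING_FLAG, "false"))) {
      // Dummy credentials are returned only in the opted-in testing path.
      return Collections.singletonList(Collections.singletonMap("dummyKey", "dummyVal"));
    }
    // Production path: no fabricated credentials.
    return Collections.emptyList();
  }
}
```

With this shape, the default behavior returns no credentials at all, and the dummy values only appear when a test harness sets the flag.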
##########
core/src/main/java/org/apache/iceberg/rest/RESTTable.java:
##########
@@ -75,11 +79,20 @@ public TableScan newScan() {
tableIdentifier,
resourcePaths,
supportedEndpoints,
- io(),
+ ioReference,
catalogProperties,
hadoopConf);
}
+ @Override
+ public FileIO io() {
Review Comment:
I am a little nervous about multi-threaded scenarios. I understand there are
places where we refer to `table.io()` directly (Spark executors are the ones I can
think of). I wonder if we should fail for now with an illegal state
to avoid scenarios like:
Thread 1:
table.newScan()......planFiles()
table.io().readFile()
Thread 2:
table.newScan()......planFiles()
table.io().readFile()
One of the threads might eventually fail. Alternatively, when we do
newScan()...planFiles() and try to set the reference, if it is already non-null
and not the table-level FileIO, could we fail at that point instead?
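The fail-fast behavior suggested above could look roughly like the following. This is only a sketch of the idea, not the PR's actual code: the `ScanScopedIO` class, its field names, and the `Object` stand-in for `FileIO` are all hypothetical, and a real guard would live inside `RESTTable` around `ioReference`:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of a scan-scoped FileIO guard: a second concurrent
// scan fails with IllegalStateException instead of silently racing on io().
class ScanScopedIO {
  // Object stands in for FileIO to keep the sketch self-contained.
  private final AtomicReference<Object> activeScanIO = new AtomicReference<>();
  private final Object tableLevelIO = new Object();

  // io() prefers the scan-scoped FileIO when a scan is active,
  // otherwise falls back to the table-level one.
  Object io() {
    Object scanIO = activeScanIO.get();
    return scanIO != null ? scanIO : tableLevelIO;
  }

  // Called at planFiles() time; fails fast if another scan already
  // installed its FileIO, rather than letting two threads race.
  void beginScan(Object scanIO) {
    if (!activeScanIO.compareAndSet(null, scanIO)) {
      throw new IllegalStateException(
          "Another scan is already using the scan-scoped FileIO; "
              + "concurrent scans on this table instance are not supported");
    }
  }

  // Clears the scan-scoped FileIO so a later scan can proceed.
  void endScan() {
    activeScanIO.set(null);
  }
}
```

The `compareAndSet` makes the "is it already non-null?" check and the install a single atomic step, which is what makes the second thread's `beginScan` fail deterministically instead of both threads observing `null` and proceeding.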
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]