Copilot commented on code in PR #4188:
URL: https://github.com/apache/solr/pull/4188#discussion_r2884975163
##########
solr/core/src/java/org/apache/solr/handler/designer/SchemaDesignerConfigSetHelper.java:
##########
@@ -474,13 +474,28 @@ protected void validateTypeChange(String configSet, SchemaField field, FieldType
void deleteStoredSampleDocs(String configSet) {
try {
+ ensureSystemCollectionExists();
cloudClient().deleteByQuery(BLOB_STORE_ID, "id:" + configSet + "_sample/*", 10);
} catch (IOException | SolrServerException | SolrException exc) {
Review Comment:
`deleteStoredSampleDocs` now auto-creates `.system` before attempting a delete. When `.system` is missing there is nothing to delete, so the delete would naturally be a no-op; with this change, cleanup flows (e.g., `SchemaDesignerAPI.cleanupTemp`) will instead create a brand-new system collection. Consider skipping `ensureSystemCollectionExists()` here and treating a missing collection/blob as nothing to delete (similar to `getStoredSampleDocs`).
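The behavior being suggested can be illustrated with a small self-contained sketch. This is a hypothetical in-memory stand-in for the blob store (no SolrJ involved): delete quietly does nothing when the collection is absent, and never creates it as a side effect.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the blob store: collection name -> stored doc ids.
// Illustrates treating a missing collection as "nothing to delete" instead of
// auto-creating it first, mirroring the behavior suggested for deleteStoredSampleDocs.
public class SampleDocCleanup {
  private final Map<String, List<String>> collections = new HashMap<>();

  public void createCollection(String name) {
    collections.putIfAbsent(name, new ArrayList<>());
  }

  public void store(String collection, String docId) {
    collections.get(collection).add(docId);
  }

  // Delete is a no-op when the collection does not exist; it never creates one.
  public boolean deleteStoredSampleDocs(String collection, String idPrefix) {
    List<String> docs = collections.get(collection);
    if (docs == null) {
      return false; // missing collection: nothing to delete, do not auto-create
    }
    return docs.removeIf(id -> id.startsWith(idPrefix));
  }

  public boolean hasCollection(String name) {
    return collections.containsKey(name);
  }
}
```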
##########
solr/core/src/java/org/apache/solr/handler/designer/SchemaDesignerConfigSetHelper.java:
##########
@@ -474,13 +474,28 @@ protected void validateTypeChange(String configSet, SchemaField field, FieldType
void deleteStoredSampleDocs(String configSet) {
try {
+ ensureSystemCollectionExists();
cloudClient().deleteByQuery(BLOB_STORE_ID, "id:" + configSet + "_sample/*", 10);
} catch (IOException | SolrServerException | SolrException exc) {
final String excStr = exc.toString();
log.warn("Failed to delete sample docs from blob store for {} due to: {}", configSet, excStr);
}
}
+ private void ensureSystemCollectionExists() throws IOException, SolrServerException {
+ if (!zkStateReader().getClusterState().hasCollection(BLOB_STORE_ID)) {
+ log.info("Creating {} collection for blob storage", BLOB_STORE_ID);
+ CollectionAdminRequest.createCollection(BLOB_STORE_ID, null, 1, 1).process(cloudClient());
+ try {
Review Comment:
`ensureSystemCollectionExists()` does a check-then-create without handling
the race where another thread/node creates `.system` between the
`hasCollection` check and the CREATE request. In that case
`createCollection(...).process(...)` can throw even though the desired end
state (collection exists) is already true. Consider catching the specific
"already exists" failure (or re-checking cluster state after a failure) and
treating it as success.
```suggestion
      try {
        CollectionAdminRequest.createCollection(BLOB_STORE_ID, null, 1, 1).process(cloudClient());
      } catch (SolrServerException | IOException e) {
        // Handle race where another node created the collection between the hasCollection()
        // check above and the createCollection() call. If the collection now exists, treat
        // this as success; otherwise, propagate the original failure.
        if (zkStateReader().getClusterState().hasCollection(BLOB_STORE_ID)) {
          if (log.isInfoEnabled()) {
            log.info("Collection {} already exists after failed create attempt; treating as success", BLOB_STORE_ID);
          }
          return;
        }
        throw e;
      }
      try {
```
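The race-tolerant create pattern in the suggestion can be demonstrated without a cluster. Below is a self-contained sketch using a hypothetical in-memory registry in place of SolrJ: a create that fails because another actor won the race is treated as success after re-checking current state.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Self-contained sketch of the check-then-create race handling suggested above.
// The registry stands in for cluster state; create() throws when the name already
// exists, like a CREATE request against a live cluster.
public class EnsureExists {
  private final Map<String, Boolean> registry = new ConcurrentHashMap<>();

  public void create(String name) {
    if (registry.putIfAbsent(name, Boolean.TRUE) != null) {
      throw new IllegalStateException("collection already exists: " + name);
    }
  }

  public boolean exists(String name) {
    return registry.containsKey(name);
  }

  // Check-then-create; a create failure is re-checked against current state
  // before being propagated, so a lost race still ends in the desired state.
  public void ensureExists(String name) {
    if (exists(name)) {
      return;
    }
    try {
      create(name);
    } catch (IllegalStateException e) {
      if (exists(name)) {
        return; // another actor created it between the check and the create
      }
      throw e;
    }
  }
}
```

The key design point is that the desired end state (collection exists) is what gets verified after a failure, not the success of this caller's own CREATE request.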
##########
solr/core/src/java/org/apache/solr/handler/designer/SchemaDesignerConfigSetHelper.java:
##########
@@ -496,12 +511,23 @@ List<SolrInputDocument> getStoredSampleDocs(final String configSet) throws IOExc
} else return Collections.emptyList();
} catch (SolrServerException e) {
throw new IOException("Failed to lookup stored docs for " + configSet + " due to: " + e);
+ } catch (SolrException e) {
+ // Collection not found or blob not found - treat as no documents stored
+ if (log.isDebugEnabled()) {
+ log.debug("No stored sample docs found for {}", configSet, e);
+ }
+ return Collections.emptyList();
} finally {
IOUtils.closeQuietly(inputStream);
}
}
void storeSampleDocs(final String configSet, List<SolrInputDocument> docs) throws IOException {
+ try {
+ ensureSystemCollectionExists();
+ } catch (SolrServerException e) {
+ throw new IOException("Failed to ensure .system collection exists", e);
+ }
Review Comment:
This PR adds runtime creation of `.system`, but the existing schema designer
tests always pre-create the blob store collection in `@BeforeClass`, so the new
behavior isn't exercised. Consider updating/adding a test that starts without
`.system` and asserts that `storeSampleDocs` (and/or other blob interactions)
creates it successfully.
##########
solr/core/src/java/org/apache/solr/handler/designer/SchemaDesignerConfigSetHelper.java:
##########
@@ -474,13 +474,28 @@ protected void validateTypeChange(String configSet, SchemaField field, FieldType
void deleteStoredSampleDocs(String configSet) {
try {
+ ensureSystemCollectionExists();
cloudClient().deleteByQuery(BLOB_STORE_ID, "id:" + configSet + "_sample/*", 10);
} catch (IOException | SolrServerException | SolrException exc) {
final String excStr = exc.toString();
log.warn("Failed to delete sample docs from blob store for {} due to: {}", configSet, excStr);
}
}
+ private void ensureSystemCollectionExists() throws IOException, SolrServerException {
+ if (!zkStateReader().getClusterState().hasCollection(BLOB_STORE_ID)) {
+ log.info("Creating {} collection for blob storage", BLOB_STORE_ID);
+ CollectionAdminRequest.createCollection(BLOB_STORE_ID, null, 1, 1).process(cloudClient());
Review Comment:
Creating the `.system` collection with a hard-coded replication factor of 1
may be less resilient than Solr's built-in auto-create path (e.g.,
`HttpSolrCall.autoCreateSystemColl` uses `min(3, liveNodes)` for replication
factor). Consider deriving the replication factor from live node count or
cluster defaults to avoid creating an under-replicated blob store collection on
multi-node clusters.
```suggestion
      int liveNodes = zkStateReader().getClusterState().getLiveNodes().size();
      int replicationFactor = Math.max(1, Math.min(3, liveNodes));
      CollectionAdminRequest.createCollection(BLOB_STORE_ID, null, 1, replicationFactor)
          .process(cloudClient());
```
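The replication-factor derivation in the suggestion is simple clamping arithmetic, sketched here in isolation: the live node count is clamped into the range [1, 3], matching the `min(3, liveNodes)` behavior the comment attributes to Solr's built-in auto-create path, with a floor of 1 so a single-node cluster still works.

```java
// Sketch of the suggested replication-factor derivation: never more replicas
// than live nodes, capped at 3, and never less than 1.
public class ReplicationFactor {
  public static int forLiveNodes(int liveNodes) {
    return Math.max(1, Math.min(3, liveNodes));
  }
}
```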
##########
solr/core/src/java/org/apache/solr/handler/designer/SchemaDesignerConfigSetHelper.java:
##########
@@ -496,12 +511,23 @@ List<SolrInputDocument> getStoredSampleDocs(final String configSet) throws IOExc
} else return Collections.emptyList();
} catch (SolrServerException e) {
throw new IOException("Failed to lookup stored docs for " + configSet + " due to: " + e);
+ } catch (SolrException e) {
+ // Collection not found or blob not found - treat as no documents stored
+ if (log.isDebugEnabled()) {
+ log.debug("No stored sample docs found for {}", configSet, e);
+ }
+ return Collections.emptyList();
Review Comment:
The new `catch (SolrException)` returns an empty list for *all*
SolrException cases. That can mask real server-side problems (e.g., 401/403
permission issues or 500 errors) and make debugging harder. Consider only
swallowing NOT_FOUND-style cases (e.g., `e.code() == ErrorCode.NOT_FOUND.code`)
and rethrowing/wrapping other SolrExceptions.
```suggestion
      if (e.code() == ErrorCode.NOT_FOUND.code) {
        if (log.isDebugEnabled()) {
          log.debug("No stored sample docs found for {}", configSet, e);
        }
        return Collections.emptyList();
      }
      // For other SolrExceptions, propagate as an IOException to avoid hiding real server problems
      throw new IOException("Failed to lookup stored docs for " + configSet, e);
```
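The narrow-catch pattern in the suggestion can be sketched in isolation. This is a hypothetical `StatusException` standing in for `SolrException` (no SolrJ dependency): only a 404 is mapped to "no documents stored"; every other status is wrapped and propagated so real server problems stay visible.

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;

// Sketch of swallowing only NOT_FOUND-style failures during a blob lookup.
// StatusException is a hypothetical stand-in for SolrException's code() API.
public class LookupHandler {
  static final int NOT_FOUND = 404;

  public static class StatusException extends RuntimeException {
    private final int code;
    public StatusException(int code, String msg) { super(msg); this.code = code; }
    public int code() { return code; }
  }

  public static List<String> handleLookupFailure(String configSet, StatusException e)
      throws IOException {
    if (e.code() == NOT_FOUND) {
      return Collections.emptyList(); // missing collection/blob: nothing stored
    }
    // Other statuses (401/403/500, ...) are real problems; do not swallow them.
    throw new IOException("Failed to lookup stored docs for " + configSet, e);
  }
}
```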
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]