gerlowskija commented on code in PR #53:
URL: https://github.com/apache/solr-sandbox/pull/53#discussion_r1161731667
##########
SolrAndKafkaIntegrationTest.java:
##########
@@ -0,0 +1,335 @@
+package org.apache.solr.crossdc;
Review Comment:
[Q] I'm assuming this is just a temp file you copied to the root of the repo
while debugging?
I spent a good 10 minutes trying to figure out why my copy of
SolrAndKafkaIntegrationTest didn't have all the commented-out methods in this
file, and then I noticed that this copy lives at the repo root and isn't the
file my IDE was pulling up 😬
##########
crossdc-producer/src/test/java/org/apache/solr/crossdc/SolrAndKafkaIntegrationTest.java:
##########
@@ -210,19 +211,25 @@ public void testProducerToCloud() throws Exception {
@Test
public void testMirroringUpdateProcessor() throws Exception {
final SolrInputDocument tooLargeDoc = new SolrInputDocument();
- tooLargeDoc.addField("id", "tooLarge-" + String.valueOf(System.currentTimeMillis()));
+ tooLargeDoc.addField("id", System.nanoTime());
tooLargeDoc.addField("text", new String(new byte[2 * MAX_DOC_SIZE_BYTES]));
final SolrInputDocument normalDoc = new SolrInputDocument();
- normalDoc.addField("id", "normalDoc-" + String.valueOf(System.currentTimeMillis()));
+ normalDoc.addField("id", System.nanoTime());
normalDoc.addField("text", "Hello world");
- final List<SolrInputDocument> docsToIndex = new ArrayList<>();
- docsToIndex.add(tooLargeDoc);
+ List<SolrInputDocument> docsToIndex = new ArrayList<>();
docsToIndex.add(normalDoc);
+ docsToIndex.add(tooLargeDoc);
final CloudSolrClient cluster1Client = solrCluster1.getSolrClient();
- cluster1Client.add(docsToIndex);
+ try {
+ cluster1Client.add(docsToIndex);
+ } catch (BaseCloudSolrClient.RouteException e) {
+ // expected
Review Comment:
[Q] Why is a RouteException expected here? If it's actually
expected/desired, shouldn't we assert on the exception to make sure it looks
the way we expect, rather than swallowing it?
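Something along these lines would make the expectation explicit (just a
sketch; it assumes LuceneTestCase.expectThrows is visible to this test class,
and the message check is a hypothetical placeholder):

    BaseCloudSolrClient.RouteException e =
        expectThrows(BaseCloudSolrClient.RouteException.class,
            () -> cluster1Client.add(docsToIndex));
    // Hypothetical check - replace with whatever the real rejection message contains.
    assertTrue(e.getMessage(), e.getMessage().contains("too large"));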
##########
crossdc-producer/src/test/java/org/apache/solr/crossdc/SolrAndKafkaIntegrationTest.java:
##########
@@ -64,19 +62,24 @@
protected static volatile MiniSolrCloudCluster solrCluster1;
protected static volatile MiniSolrCloudCluster solrCluster2;
- protected static volatile Consumer consumer = new Consumer();
+ protected static volatile Consumer consumer;
private static String TOPIC = "topic1";
private static String COLLECTION = "collection1";
private static String ALT_COLLECTION = "collection2";
+ private static Thread.UncaughtExceptionHandler uceh;
@BeforeClass
Review Comment:
[-1/Q] Each test case in this class relies pretty heavily on the assumption
that when a test method starts, COLLECTION has 0 docs on both the primary
and secondary clusters.
Won't moving the Solr cluster creation/deletion logic from Before/After to
BeforeClass/AfterClass break that assumption everywhere? Or am I overlooking
something that deletes all docs or deletes/recreates the collections in
between test methods? (If not, a per-test cleanup like the sketch below might
be needed.)
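If keeping cluster startup in BeforeClass is the goal, a per-test cleanup
could preserve that assumption, something like this (rough sketch only; the
method name is illustrative, and ALT_COLLECTION would need the same treatment
wherever it's used):

    @After
    public void clearDocsBetweenTests() throws Exception {
      // Wipe COLLECTION on both clusters so each test still starts from 0 docs.
      solrCluster1.getSolrClient().deleteByQuery(COLLECTION, "*:*");
      solrCluster1.getSolrClient().commit(COLLECTION);
      solrCluster2.getSolrClient().deleteByQuery(COLLECTION, "*:*");
      solrCluster2.getSolrClient().commit(COLLECTION);
    }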
##########
crossdc-producer/src/test/java/org/apache/solr/crossdc/SolrAndKafkaIntegrationTest.java:
##########
@@ -210,19 +211,25 @@ public void testProducerToCloud() throws Exception {
@Test
public void testMirroringUpdateProcessor() throws Exception {
final SolrInputDocument tooLargeDoc = new SolrInputDocument();
- tooLargeDoc.addField("id", "tooLarge-" + String.valueOf(System.currentTimeMillis()));
+ tooLargeDoc.addField("id", System.nanoTime());
Review Comment:
[Q] What's the purpose of the doc-id change here? Were you running into ID
conflicts between runs? If you don't find the prefixes useful, fair enough,
but I found the "tooLarge-" and "normalDoc-" prefixes useful for debugging
when I initially wrote this test.
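For what it's worth, the prefixes and the uniqueness fix aren't mutually
exclusive; a sketch of keeping both:

    tooLargeDoc.addField("id", "tooLarge-" + System.nanoTime());
    normalDoc.addField("id", "normalDoc-" + System.nanoTime());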