cshannon commented on code in PR #4133:
URL: https://github.com/apache/accumulo/pull/4133#discussion_r1450416089


##########
test/src/main/java/org/apache/accumulo/test/functional/SplitIT.java:
##########
@@ -92,6 +93,7 @@ protected Duration defaultTimeout() {
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
     cfg.setProperty(Property.TSERV_MAXMEM, "5K");
+    cfg.setMemory(ServerType.TABLET_SERVER, 384, MemoryUnit.MEGABYTE);

Review Comment:
   This was pretty weird; according to the logs it was happening during scans.
It certainly seems related to more data being written, and I agree that it
should be investigated more. One option to figure it out would be to enable the
JVM to dump the heap on OOM, and then we can analyze what's in the dump file
using something like Eclipse MAT.
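   For example, just a sketch: the flags below are standard HotSpot options,
but how the extra options would actually get passed to the tserver JVM in the
mini cluster is an assumption here, and the helper class is hypothetical.

   import java.util.List;

   public class HeapDumpFlags {
     // HotSpot options that write an .hprof heap dump when an
     // OutOfMemoryError is thrown; the resulting file can be opened in
     // Eclipse MAT.
     public static List<String> tserverOomFlags(String dumpDir) {
       return List.of(
           "-XX:+HeapDumpOnOutOfMemoryError", // dump the heap on OOM
           "-XX:HeapDumpPath=" + dumpDir);    // where to write the dump
     }

     public static void main(String[] args) {
       // Print the flags so they can be added to whatever mechanism the test
       // harness uses for extra tablet server JVM options (hypothetical).
       System.out.println(String.join(" ", tserverOomFlags("/tmp/tserver-oom")));
     }
   }

   Once a dump is produced, MAT's leak suspects report should be enough to see
which objects are holding onto the memory during the scan.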


