Manno15 edited a comment on issue #1664: URL: https://github.com/apache/accumulo/issues/1664#issuecomment-694431085
It's definitely possible that I could have done things more optimally. I was told a good way to produce a lot of delete candidates was to create a pre-split table (I chose 75k splits), ingest some data, compact the table, clone it, and then delete one of the two tables. From then on, I just had to compact the remaining table to keep reproducing delete candidates. This worked well, but it also took a decent amount of time between test runs. Another part of the issue is that a couple of the laptops in my cluster are very old hardware and tend to crash even when they're idle. I haven't tried your method, so maybe the way I did it is more convoluted and taxing on the machines. I can look into doing that tomorrow to see if I can get better results.
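
For reference, the steps above map roughly to the sketch below against the Accumulo 2.x client API. This isn't the exact tooling I used; the client properties path, table name, split count, and row count are placeholders.

```java
import java.util.Collections;
import java.util.TreeSet;

import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.data.Mutation;
import org.apache.hadoop.io.Text;

public class DeleteCandidateRepro {
  public static void main(String[] args) throws Exception {
    try (AccumuloClient client =
        Accumulo.newClient().from("/path/to/accumulo-client.properties").build()) {
      String table = "gc_repro";

      // Create the table and pre-split it (I used ~75k splits; far fewer here).
      client.tableOperations().create(table);
      TreeSet<Text> splits = new TreeSet<>();
      for (int i = 1; i < 1000; i++) {
        splits.add(new Text(String.format("row_%07d", i * 100)));
      }
      client.tableOperations().addSplits(table, splits);

      // Ingest some data.
      try (BatchWriter bw = client.createBatchWriter(table)) {
        for (int i = 0; i < 100_000; i++) {
          Mutation m = new Mutation(String.format("row_%07d", i));
          m.put("cf", "cq", "val" + i);
          bw.addMutation(m);
        }
      }

      // Compact, clone, then delete one of the two tables (the clone here).
      // From then on, each full compaction of the surviving table produces
      // another batch of delete candidates for the GC to work through.
      client.tableOperations().compact(table, null, null, true, true);
      client.tableOperations().clone(table, table + "_clone", true,
          Collections.emptyMap(), Collections.emptySet());
      client.tableOperations().delete(table + "_clone");
      client.tableOperations().compact(table, null, null, true, true);
    }
  }
}
```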
