Hi folks,

When sstableloader is loading a very large sstable, Cassandra may end up logging
a message like this:

INFO  [pool-1-thread-4] 2020-02-08 01:35:37,946 NoSpamLogger.java:91 - Maximum memory usage reached (536870912), cannot allocate chunk of 1048576
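
For what it's worth, 536870912 bytes is 512 MiB, which I believe matches the
default chunk cache limit (file_cache_size_in_mb) in cassandra.yaml, so I'm
assuming that's the pool being exhausted here. Roughly the setting I mean,
sketched from memory, in case it's relevant:

   # cassandra.yaml (sketch only; assuming the chunk cache is the pool involved)
   # The default is min(0.25 * heap, 512MiB), i.e. the 536870912 logged above.
   file_cache_size_in_mb: 1024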

The loading process doesn't abort, and sstableloader's stdout appears to report
success in the end, e.g. 100% totals across all of the nodes:

progress: [/10.0.1.116]0:11/11 100% [/10.0.1.248]0:11/11 100% [/10.0.1.93]0:11/11 100% total: 100% 0.000KiB/s (avg: 36.156MiB/s)
progress: [/10.0.1.116]0:11/11 100% [/10.0.1.248]0:11/11 100% [/10.0.1.93]0:11/11 100% total: 100% 0.000KiB/s (avg: 34.914MiB/s)
progress: [/10.0.1.116]0:11/11 100% [/10.0.1.248]0:11/11 100% [/10.0.1.93]0:11/11 100% total: 100% 0.000KiB/s (avg: 33.794MiB/s)

Summary statistics:
   Connections per host    : 1
   Total files transferred : 33
   Total bytes transferred : 116.027GiB
   Total duration          : 3515748 ms
   Average transfer rate   : 33.794MiB/s
   Peak transfer rate      : 53.130MiB/s
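
(Those numbers also look self-consistent to me: 116.027 GiB over 3515748 ms
works out to roughly 33.8 MiB/s, which matches the reported average rate.)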

In these situations, is sstableloader hitting the memory limit and then retrying
a few times until it succeeds?  Or is it silently dropping data on the floor?
I'd assume the former, but thought it'd be good to ask you folks to be sure...

Jim
