JeetKunDoug commented on code in PR #13:
URL:
https://github.com/apache/cassandra-analytics/pull/13#discussion_r1287708124
##########
cassandra-four-zero/src/main/java/org/apache/cassandra/bridge/SSTableWriterImplementation.java:
##########
@@ -80,4 +79,28 @@ public void close() throws IOException
{
writer.close();
}
+
+ @VisibleForTesting
+    static CQLSSTableWriter.Builder configureBuilder(String inDirectory,
+                                                     String createStatement,
+                                                     String insertStatement,
+                                                     RowBufferMode rowBufferMode,
+                                                     int bufferSizeMB,
+                                                     IPartitioner cassPartitioner)
+    {
+        CQLSSTableWriter.Builder builder = CQLSSTableWriter.builder()
+                                                           .inDirectory(inDirectory)
+                                                           .forTable(createStatement)
+                                                           .withPartitioner(cassPartitioner)
+                                                           .using(insertStatement);
+        if (rowBufferMode == RowBufferMode.UNBUFFERED)
+        {
+            builder.sorted();
+        }
+        else if (rowBufferMode == RowBufferMode.BUFFERED)
+        {
+            builder.withBufferSizeInMB(bufferSizeMB);
Review Comment:
I think, for now, we should leave this as just a configuration option... Validating it
against the Spark environment and picking a reasonable upper bound would take
some experimentation (and would likely involve more than just the
executor.memory setting, as there are other config settings that deal with
memory, like memory overhead).
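
As a rough illustration of the kind of validation discussed above, one option would be to clamp the requested buffer size against a fraction of the available JVM heap. This is only a hypothetical sketch; the class, method, and the quarter-of-heap bound are illustrative assumptions, not part of the PR.

```java
// Hypothetical sketch of buffer-size validation; names and the
// quarter-of-heap bound are illustrative assumptions, not from the PR.
public class BufferSizeValidation
{
    /**
     * Clamps a user-supplied buffer size (in MB) to the range
     * [1, maxHeapBytes / 4], so the SSTable write buffer can never
     * claim more than a quarter of the heap.
     */
    static int clampBufferSizeMB(int requestedMB, long maxHeapBytes)
    {
        int upperBoundMB = (int) (maxHeapBytes / (4L * 1024 * 1024));
        return Math.min(Math.max(requestedMB, 1), upperBoundMB);
    }
}
```

In a real implementation, as the comment notes, the bound would likely need to account for more than executor.memory alone (e.g. memory overhead settings), so the fraction used here would be determined by experimentation.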
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]