fredia commented on code in PR #24812:
URL: https://github.com/apache/flink/pull/24812#discussion_r1629027426
##########
flink-state-backends/flink-statebackend-forst/src/main/java/org/apache/flink/state/forst/ForStWriteBatchOperation.java:
##########
@@ -55,22 +54,44 @@ public class ForStWriteBatchOperation implements ForStDBOperation {
     public CompletableFuture<Void> process() {
         return CompletableFuture.runAsync(
                 () -> {
-                    try (WriteBatch writeBatch =
-                            new WriteBatch(batchRequest.size() * PER_RECORD_ESTIMATE_BYTES)) {
+                    try (ForStDBWriteBatchWrapper writeBatch =
+                            new ForStDBWriteBatchWrapper(db, writeOptions, batchRequest.size())) {
                         for (ForStDBPutRequest<?, ?> request : batchRequest) {
+                            ColumnFamilyHandle cf = request.getColumnFamilyHandle();
                             if (request.valueIsNull()) {
-                                // put(key, null) == delete(key)
-                                writeBatch.delete(
-                                        request.getColumnFamilyHandle(),
-                                        request.buildSerializedKey());
+                                if (request instanceof ForStDBBunchPutRequest) {
+                                    ForStDBBunchPutRequest<?> bunchPutRequest =
+                                            (ForStDBBunchPutRequest<?>) request;
+                                    byte[] primaryKey = bunchPutRequest.buildSerializedKey(null);
+                                    byte[] endKey = ForStDBBunchPutRequest.nextBytes(primaryKey);
Review Comment:
I found that `deleteRange` may be ambiguous in some cases, such as when the
primary key is an array of `Byte.MAX_VALUE` bytes, so I changed it to iterate
over the range and delete the entries one by one.
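
The edge case can be sketched as follows. This is a hypothetical successor-key computation, not the actual `ForStDBBunchPutRequest.nextBytes` implementation: when every byte of the key already equals `Byte.MAX_VALUE`, there is no valid exclusive end key for a `deleteRange`, which is why falling back to iterate-and-delete is safer.

```java
import java.util.Arrays;

public class NextBytesSketch {
    // Hypothetical successor computation: increment the last byte that is not
    // Byte.MAX_VALUE and truncate the rest. Returns null when every byte is
    // already Byte.MAX_VALUE, i.e. no well-defined exclusive upper bound
    // exists and deleteRange(primaryKey, endKey) would be ambiguous.
    static byte[] nextBytes(byte[] key) {
        for (int i = key.length - 1; i >= 0; i--) {
            if (key[i] != Byte.MAX_VALUE) {
                byte[] next = Arrays.copyOf(key, i + 1);
                next[i]++;
                return next;
            }
        }
        return null; // no successor: caller must iterate and delete one by one
    }

    public static void main(String[] args) {
        // Normal case: a strictly greater end key exists.
        System.out.println(Arrays.toString(nextBytes(new byte[] {1, 2, 3})));

        // Edge case from the review comment: all bytes are Byte.MAX_VALUE.
        System.out.println(nextBytes(new byte[] {Byte.MAX_VALUE, Byte.MAX_VALUE}));
    }
}
```

With this sketch, the ambiguity is visible directly: the all-`Byte.MAX_VALUE` key yields no end key, so a range delete cannot express "everything with this prefix" without special-casing.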
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]