baohe-zhang commented on a change in pull request #29425:
URL: https://github.com/apache/spark/pull/29425#discussion_r481479994
##########
File path: common/kvstore/src/main/java/org/apache/spark/util/kvstore/LevelDB.java
##########
@@ -164,35 +166,44 @@ public void writeAll(List<?> values) throws Exception {
     Preconditions.checkArgument(values != null && !values.isEmpty(),
       "Non-empty values required.");
-    // Group by class, in case there are values from different classes in the values
+    // Group by class, in case there are values from different classes in the values.
     // Typical usecase is for this to be a single class.
     // A NullPointerException will be thrown if values contain null object.
     for (Map.Entry<? extends Class<?>, ? extends List<?>> entry :
         values.stream().collect(Collectors.groupingBy(Object::getClass)).entrySet()) {
-
-      final Iterator<?> valueIter = entry.getValue().iterator();
-      final Iterator<byte[]> serializedValueIter;
-
-      // Deserialize outside synchronized block
-      List<byte[]> list = new ArrayList<>(entry.getValue().size());
-      for (Object value : values) {
-        list.add(serializer.serialize(value));
-      }
-      serializedValueIter = list.iterator();
-
       final Class<?> klass = entry.getKey();
-      final LevelDBTypeInfo ti = getTypeInfo(klass);
-      synchronized (ti) {
-        final LevelDBTypeInfo.Index naturalIndex = ti.naturalIndex();
-        final Collection<LevelDBTypeInfo.Index> indices = ti.indices();
+      // Partition the large value list to a set of smaller batches, to reduce the memory
+      // pressure caused by serialization and give fairness to other writing threads.
+      for (List<?> batchList : Iterables.partition(entry.getValue(), 128)) {
+        final Iterator<?> valueIter = batchList.iterator();
+        final Iterator<byte[]> serializedValueIter;
-        try (WriteBatch batch = db().createWriteBatch()) {
-          while (valueIter.hasNext()) {
-            updateBatch(batch, valueIter.next(), serializedValueIter.next(), klass,
-              naturalIndex, indices);
+        // Deserialize outside synchronized block
+        List<byte[]> serializedValueList = new ArrayList<>(batchList.size());
+        for (Object value : batchList) {
+          serializedValueList.add(serializer.serialize(value));
+        }
+        serializedValueIter = serializedValueList.iterator();
+
+        final LevelDBTypeInfo ti = getTypeInfo(klass);
+        synchronized (ti) {
+          final LevelDBTypeInfo.Index naturalIndex = ti.naturalIndex();
+          final Collection<LevelDBTypeInfo.Index> indices = ti.indices();
+
+          try (WriteBatch batch = db().createWriteBatch()) {
+            // A hash map to update the delta of each countKey, wrap countKey with type byte[]
+            // as ByteBuffer because ByteBuffer is comparable.
+            Map<ByteBuffer, Long> counts = new HashMap<>();
+            while (valueIter.hasNext()) {
+              updateBatch(batch, valueIter.next(), serializedValueIter.next(), klass,
+                naturalIndex, indices, counts);
+            }
+            for (Map.Entry<ByteBuffer, Long> countEntry : counts.entrySet()) {
+              naturalIndex.updateCount(batch, countEntry.getKey().array(), countEntry.getValue());
Review comment:
@HeartSaVioR Here I use naturalIndex.updateCount() to put the count
information for all indexes into the batch. While implementing this I found
that we could lift the methods **updateCount()** and **long getCount(byte[]
key)** from LevelDBTypeInfo.Index up to LevelDBTypeInfo, since neither method
accesses any member of LevelDBTypeInfo.Index. Doing that would let us call
ti.updateCount() to update the count for all indexes, which would make more
sense; a rough sketch of the lift is below.
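
For illustration only, here is what the lifted methods might look like on
LevelDBTypeInfo. The bodies mirror what I believe the Index implementations do
today; the `db` field and the serializer calls are assumptions for the sketch,
not verbatim code from this PR:

```java
// Hypothetical sketch of the proposed lift, assuming LevelDBTypeInfo keeps a
// reference to the owning LevelDB instance and KVStoreSerializer exposes
// serialize(Object)/deserializeLong as in the existing Index logic.
import org.iq80.leveldb.WriteBatch;

class LevelDBTypeInfo {

  private final LevelDB db;
  // ... existing fields, constructor, naturalIndex(), indices(), etc. ...

  LevelDBTypeInfo(LevelDB db) {
    this.db = db;
  }

  // Lifted from Index: read the stored count for a countKey, defaulting to 0.
  long getCount(byte[] key) throws Exception {
    byte[] data = db.db().get(key);
    return data != null ? db.serializer.deserializeLong(data) : 0L;
  }

  // Lifted from Index: apply a delta to the stored count inside the write
  // batch, deleting the entry once the count drops to zero or below.
  void updateCount(WriteBatch batch, byte[] key, long delta) throws Exception {
    long updated = getCount(key) + delta;
    if (updated > 0) {
      batch.put(key, db.serializer.serialize(updated));
    } else {
      batch.delete(key);
    }
  }
}
```

The caller in writeAll() could then use
`ti.updateCount(batch, countEntry.getKey().array(), countEntry.getValue());`
without going through naturalIndex.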
However, it's totally optional.
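
As a side note on the `counts` map in the diff above: a raw `byte[]` cannot
serve as a `HashMap` key because arrays inherit identity-based
`equals()`/`hashCode()`, whereas `ByteBuffer` compares by content (and is also
`Comparable`). A minimal standalone illustration of why the countKey is
wrapped:

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

public class CountKeyDemo {
  public static void main(String[] args) {
    // Raw byte[] keys: identical contents still miss, because arrays use
    // identity-based equals()/hashCode().
    Map<byte[], Long> raw = new HashMap<>();
    raw.put(new byte[] {1, 2, 3}, 1L);
    System.out.println(raw.get(new byte[] {1, 2, 3})); // null

    // ByteBuffer keys: content-based equality, so deltas accumulate correctly.
    Map<ByteBuffer, Long> counts = new HashMap<>();
    counts.merge(ByteBuffer.wrap(new byte[] {1, 2, 3}), 1L, Long::sum);
    counts.merge(ByteBuffer.wrap(new byte[] {1, 2, 3}), 1L, Long::sum);
    System.out.println(counts.get(ByteBuffer.wrap(new byte[] {1, 2, 3}))); // 2

    // getKey().array() recovers the backing byte[] when flushing to the batch.
  }
}
```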