StefanRRichter commented on code in PR #22788:
URL: https://github.com/apache/flink/pull/22788#discussion_r1235422689
##########
flink-core/src/main/java/org/apache/flink/util/CollectionUtil.java:
##########
@@ -133,4 +140,71 @@ public static <K, V> Map<K, V> map(Map.Entry<K, V>... entries) {
         }
         return Collections.unmodifiableMap(map);
     }
+
+    /**
+     * Creates a new {@link HashMap} of the expected size, i.e. a hash map that will not rehash
+     * if expectedSize many keys are inserted, considering the load factor.
+     *
+     * @param expectedSize the expected size of the created hash map.
+     * @return a new hash map instance with enough capacity for the expected size.
+     * @param <K> the type of keys maintained by this map.
+     * @param <V> the type of mapped values.
+     */
+    public static <K, V> HashMap<K, V> newHashMapWithExpectedSize(int expectedSize) {
+        return new HashMap<>(computeRequiredCapacity(expectedSize), HASH_MAP_DEFAULT_LOAD_FACTOR);
+    }
+
+    /**
+     * Creates a new {@link LinkedHashMap} of the expected size, i.e. a hash map that will not
+     * rehash if expectedSize many keys are inserted, considering the load factor.
+     *
+     * @param expectedSize the expected size of the created hash map.
+     * @return a new hash map instance with enough capacity for the expected size.
+     * @param <K> the type of keys maintained by this map.
+     * @param <V> the type of mapped values.
+     */
+    public static <K, V> LinkedHashMap<K, V> newLinkedHashMapWithExpectedSize(int expectedSize) {
+        return new LinkedHashMap<>(
+                computeRequiredCapacity(expectedSize), HASH_MAP_DEFAULT_LOAD_FACTOR);
+    }
+
+    /**
+     * Creates a new {@link HashSet} of the expected size, i.e. a hash set that will not rehash
+     * if expectedSize many unique elements are inserted, considering the load factor.
+     *
+     * @param expectedSize the expected size of the created hash set.
+     * @return a new hash set instance with enough capacity for the expected size.
+     * @param <E> the type of elements stored by this set.
+     */
+    public static <E> HashSet<E> newHashSetWithExpectedSize(int expectedSize) {
+        return new HashSet<>(computeRequiredCapacity(expectedSize), HASH_MAP_DEFAULT_LOAD_FACTOR);
+    }
+
+    /**
+     * Creates a new {@link LinkedHashSet} of the expected size, i.e. a hash set that will not
+     * rehash if expectedSize many unique elements are inserted, considering the load factor.
+     *
+     * @param expectedSize the expected size of the created hash set.
+     * @return a new hash set instance with enough capacity for the expected size.
+     * @param <E> the type of elements stored by this set.
+     */
+    public static <E> LinkedHashSet<E> newLinkedHashSetWithExpectedSize(int expectedSize) {
+        return new LinkedHashSet<>(
+                computeRequiredCapacity(expectedSize), HASH_MAP_DEFAULT_LOAD_FACTOR);
+    }
+
+    /**
+     * Helper method to compute the right capacity for a hash map with load factor
+     * HASH_MAP_DEFAULT_LOAD_FACTOR.
+     */
+    @VisibleForTesting
+    static int computeRequiredCapacity(int expectedSize) {
Review Comment:
It's a private method and would always be called with the same value. The static load
factor is also passed into the map constructors, so I don't fully understand that part.
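
For context, the helper under discussion sizes a map so that `expectedSize` insertions
stay below the resize threshold (`capacity * loadFactor`). A minimal standalone sketch of
that idea follows; this is a hypothetical illustration, not the actual Flink
implementation, and it assumes `HASH_MAP_DEFAULT_LOAD_FACTOR` is the JDK default of
`0.75f`:

```java
// Hypothetical sketch of a capacity helper; the real CollectionUtil code may differ.
public class CapacitySketch {

    // Assumed to match the JDK's default HashMap load factor.
    static final float HASH_MAP_DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * Returns an initial capacity large enough that inserting expectedSize
     * entries does not push the map past its resize threshold
     * (capacity * loadFactor), i.e. no rehash occurs.
     */
    static int computeRequiredCapacity(int expectedSize) {
        if (expectedSize < 0) {
            throw new IllegalArgumentException("expectedSize must be >= 0");
        }
        return expectedSize < 2
                ? expectedSize + 1
                : (int) Math.ceil(expectedSize / (double) HASH_MAP_DEFAULT_LOAD_FACTOR);
    }

    public static void main(String[] args) {
        // 12 expected entries at load factor 0.75 need capacity ceil(12 / 0.75).
        System.out.println(computeRequiredCapacity(12)); // prints 16
    }
}
```

Note that `HashMap` rounds any requested capacity up to the next power of two internally,
so passing a precise value here only avoids rehashing; it does not control the exact table
size.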
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]