bruno-roustant commented on code in PR #1466:
URL: https://github.com/apache/solr/pull/1466#discussion_r1144360207
##########
solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java:
##########
@@ -1265,15 +1260,13 @@ protected Elevation mergeWith(Elevation elevation) {
         return this;
       }
       Set<BytesRef> elevatedIds =
-          ImmutableSet.<BytesRef>builder()
-              .addAll(this.elevatedIds)
-              .addAll(elevation.elevatedIds)
-              .build();
+          Stream.concat(this.elevatedIds.stream(), elevation.elevatedIds.stream())
+              .collect(Collectors.toCollection(LinkedHashSet::new));
       boolean overlappingElevatedIds =
           elevatedIds.size() != (this.elevatedIds.size() + elevation.elevatedIds.size());
       BooleanQuery.Builder includeQueryBuilder = new BooleanQuery.Builder();
       Set<BooleanClause> clauseSet =
-          (overlappingElevatedIds ? Sets.newHashSetWithExpectedSize(elevatedIds.size()) : null);
+          (overlappingElevatedIds ? new HashSet<>(elevatedIds.size()) : null);
Review Comment:
We should use org.apache.lucene.util.CollectionUtil.newHashSet(int) here
(and later, with JDK 19, HashSet.newHashSet(int)), so that the expected size
is converted into a proper initial capacity.
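As a rough sketch (the class and helper names below are illustrative, not Solr
or Lucene code), a sized-set factory in the spirit of
CollectionUtil.newHashSet(int) / JDK 19's HashSet.newHashSet(int) turns an
expected element count into an initial capacity large enough to avoid any
rehash while the set is filled:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only -- mirrors what CollectionUtil.newHashSet(int)
// and JDK 19's HashSet.newHashSet(int) do: translate an *expected element
// count* into an *initial capacity* big enough that filling the set with
// expectedSize elements never triggers a resize.
public class SizedSets {
  static <E> Set<E> newHashSet(int expectedSize) {
    // The default load factor is 0.75, so the backing table needs a
    // capacity of at least expectedSize / 0.75 (rounded up).
    return new HashSet<>((int) Math.ceil(expectedSize / 0.75));
  }

  public static void main(String[] args) {
    Set<Integer> ids = newHashSet(10);
    for (int i = 0; i < 10; i++) {
      ids.add(i);
    }
    System.out.println(ids.size()); // 10
  }
}
```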
##########
solr/core/src/java/org/apache/solr/cloud/SizeLimitedDistributedMap.java:
##########
@@ -77,7 +77,7 @@ protected boolean lessThan(Long a, Long b) {
}
};
-    Map<String, Long> childToModificationZxid = Maps.newHashMapWithExpectedSize(children.size());
+    Map<String, Long> childToModificationZxid = new HashMap<>(children.size());
Review Comment:
This is not equivalent to Maps.newHashMapWithExpectedSize.
In this HashMap constructor, the parameter is the initial capacity (rounded up
to the next power of two), not the expected number of entries. With the default
load factor (0.75), the map resizes (and rehashes) once it holds more than
capacity * 0.75 entries. For example, if children.size() == 16, the HashMap
gets a table of 16 buckets and a resize threshold of 12, so it rehashes while
the 16 entries are being inserted.
Maps.newHashMapWithExpectedSize takes the load factor into account and sets the
initial capacity to (children.size() / 0.75) + 1, ensuring sufficient capacity
and no rehash during construction.
We can use org.apache.lucene.util.CollectionUtil.newHashMap(int size).
Later, with JDK 19, we will be able to use HashMap.newHashMap(int numMappings).
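The sizing arithmetic behind this comment can be sketched as follows
(illustrative helper class, not Solr code; tableSizeFor reproduces HashMap's
rounding of the requested capacity up to a power of two):

```java
// Illustrative sketch only -- reproduces HashMap's sizing arithmetic to
// show when new HashMap<>(n) must rehash while n entries are inserted.
public class MapSizing {
  static final float LOAD_FACTOR = 0.75f; // HashMap's default

  // HashMap rounds the requested initial capacity up to a power of two.
  static int tableSizeFor(int cap) {
    if (cap <= 1) {
      return 1;
    }
    return Integer.highestOneBit(cap - 1) << 1;
  }

  // True if a map built with new HashMap<>(n) resizes (and rehashes)
  // before it can hold n entries.
  static boolean rehashesWhenFilled(int n) {
    int threshold = (int) (tableSizeFor(n) * LOAD_FACTOR);
    return n > threshold;
  }

  // Guava/CollectionUtil-style sizing: derive the capacity from the
  // expected size so that no resize happens during construction.
  static int capacityForExpectedSize(int expectedSize) {
    return (int) (expectedSize / LOAD_FACTOR) + 1;
  }

  public static void main(String[] args) {
    // new HashMap<>(16): table of 16, threshold 12 -> rehashes.
    System.out.println(rehashesWhenFilled(16)); // true
    // Sized from the expected count instead: 16 entries fit comfortably.
    int cap = capacityForExpectedSize(16); // 22 -> table of 32, threshold 24
    System.out.println(16 <= (int) (tableSizeFor(cap) * LOAD_FACTOR)); // true
  }
}
```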
##########
solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java:
##########
@@ -1558,7 +1556,7 @@ Solution is q(k) = 1/2 (k^2+k+2)
     public Builder<E, M> addSubset(Collection<E> subset, M matchValue) {
       if (!subset.isEmpty()) {
         TrieSubsetMatcher.Node<E, M> node = root;
-        for (E e : ImmutableSortedSet.copyOf(subset)) {
+        for (E e : Collections.unmodifiableSortedSet(new TreeSet<>(subset))) {
Review Comment:
We can remove the immutability; I chose it originally just for performance
with the Guava collection.
What matters here is to 1) remove duplicates and 2) sort, and a TreeSet does
both. My only concern is the relatively slow iteration of a TreeSet, but
that's acceptable.
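A minimal sketch (illustrative class name, not Solr code) of the two
properties the TreeSet-based replacement guarantees on the insertion path:
duplicates are dropped and elements are visited in natural order:

```java
import java.util.Collection;
import java.util.List;
import java.util.TreeSet;

// Illustrative sketch only -- a TreeSet de-duplicates on insertion and
// iterates in natural (sorted) order, which is exactly what the trie
// insertion path relies on.
public class SortedSubsets {
  static <E extends Comparable<E>> List<E> dedupAndSort(Collection<E> subset) {
    // List.copyOf preserves the TreeSet's sorted iteration order.
    return List.copyOf(new TreeSet<>(subset));
  }

  public static void main(String[] args) {
    System.out.println(dedupAndSort(List.of("b", "a", "c", "b"))); // [a, b, c]
  }
}
```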
##########
solr/core/src/java/org/apache/solr/handler/component/QueryElevationComponent.java:
##########
@@ -1590,10 +1587,10 @@ public int getSubsetCount() {
   * Returns an iterator over all the subsets that are contained by the provided set. The returned
   * iterator does not support removal.
   *
-   * @param set This set is copied to a new {@link ImmutableSortedSet} with natural ordering.
+   * @param set This set is copied to a new {@link SortedSet} with natural ordering.
   */
  public Iterator<M> findSubsetsMatching(Collection<E> set) {
-    return new MatchIterator(ImmutableSortedSet.copyOf(set));
+    return new MatchIterator(Collections.unmodifiableSortedSet(new TreeSet<>(set)));
Review Comment:
Immutability can be removed.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]