benwtrent commented on code in PR #15169:
URL: https://github.com/apache/lucene/pull/15169#discussion_r2345352175


##########
lucene/core/src/java/org/apache/lucene/codecs/lucene104/Lucene104ScalarQuantizedVectorsFormat.java:
##########
@@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene104;
+
+import java.io.IOException;
+import java.util.Optional;
+import org.apache.lucene.codecs.hnsw.FlatVectorScorerUtil;
+import org.apache.lucene.codecs.hnsw.FlatVectorsFormat;
+import org.apache.lucene.codecs.hnsw.FlatVectorsReader;
+import org.apache.lucene.codecs.hnsw.FlatVectorsWriter;
+import org.apache.lucene.codecs.lucene99.Lucene99FlatVectorsFormat;
+import org.apache.lucene.index.SegmentReadState;
+import org.apache.lucene.index.SegmentWriteState;
+
+/**
+ * The quantization format used here is a per-vector optimized scalar quantization. These ideas are
+ * evolutions of LVQ proposed in <a href="https://arxiv.org/abs/2304.04759">Similarity search in the
+ * blink of an eye with compressed indices</a> by Cecilia Aguerrebere et al., the previous work on
+ * globally optimized scalar quantization in Apache Lucene, and <a
+ * href="https://arxiv.org/abs/1908.10396">Accelerating Large-Scale Inference with Anisotropic
+ * Vector Quantization</a> by Ruiqi Guo et al. Also see {@link
+ * org.apache.lucene.util.quantization.OptimizedScalarQuantizer}. Some of the key features are:
+ *
+ * <ul>
+ *   <li>Estimating the distance between two vectors using their centroid centered distance. This
+ *       requires some additional corrective factors, but allows for centroid centering to occur.
+ *   <li>Optimized scalar quantization to single bit level of centroid centered vectors.
+ *   <li>Asymmetric quantization of vectors, where query vectors are quantized to half-byte (4 bits)
+ *       precision (normalized to the centroid) and then compared directly against the single bit
+ *       quantized vectors in the index.
+ *   <li>Transforming the half-byte quantized query vectors in such a way that the comparison with
+ *       single bit vectors can be done with bit arithmetic.
+ * </ul>
+ *
+ * A previous work related to improvements over regular LVQ is <a
+ * href="https://arxiv.org/abs/2409.09913">Practical and Asymptotically Optimal Quantization of
+ * High-Dimensional Vectors in Euclidean Space for Approximate Nearest Neighbor Search</a> by
+ * Jianyang Gao, et al.
+ *
+ * <p>The format is stored within two files:
+ *
+ * <h2>.veq (vector data) file</h2>
+ *
+ * <p>Stores the quantized vectors in a flat format. Additionally, it stores each vector's
+ * corrective factors. At the end of the file, additional information is stored for vector ordinal
+ * to centroid ordinal mapping and sparse vector information.
+ *
+ * <ul>
+ *   <li>For each vector:
+ *       <ul>
+ *         <li><b>[byte]</b> the quantized values. Each dimension may be up to 8 bits, and multiple
+ *             dimensions may be packed into a single byte.
+ *         <li><b>[float]</b> the optimized quantiles and an additional similarity dependent
+ *             corrective factor.
+ *         <li><b>[int]</b> the sum of the quantized components
+ *       </ul>
+ *   <li>After the vectors, sparse vector information keeping track of monotonic blocks.
+ * </ul>
+ *
+ * <h2>.vemq (vector metadata) file</h2>
+ *
+ * <p>Stores the metadata for the vectors. This includes the number of vectors, the number of
+ * dimensions, and file offset information.
+ *
+ * <ul>
+ *   <li><b>int</b> the field number
+ *   <li><b>int</b> the vector encoding ordinal
+ *   <li><b>int</b> the vector similarity ordinal
+ *   <li><b>vint</b> the vector dimensions
+ *   <li><b>vlong</b> the offset to the vector data in the .veq file
+ *   <li><b>vlong</b> the length of the vector data in the .veq file
+ *   <li><b>vint</b> the number of vectors
+ *   <li><b>vint</b> the wire number for ScalarEncoding
+ *   <li><b>[float]</b> the centroid
+ *   <li><b>float</b> the centroid square magnitude
+ *   <li>The sparse vector information, if required, mapping vector ordinal to doc ID
+ * </ul>
+ */
+public class Lucene104ScalarQuantizedVectorsFormat extends FlatVectorsFormat {
+  public static final String QUANTIZED_VECTOR_COMPONENT = "QVEC";
+  public static final String NAME = "Lucene104ScalarQuantizedVectorsFormat";
+
+  static final int VERSION_START = 0;
+  static final int VERSION_CURRENT = VERSION_START;
+  static final String META_CODEC_NAME = "Lucene104ScalarQuantizedVectorsFormatMeta";
+  static final String VECTOR_DATA_CODEC_NAME = "Lucene104ScalarQuantizedVectorsFormatData";
+  static final String META_EXTENSION = "vemq";
+  static final String VECTOR_DATA_EXTENSION = "veq";
+  static final int DIRECT_MONOTONIC_BLOCK_SHIFT = 16;
+
+  private static final FlatVectorsFormat rawVectorFormat =
+      new Lucene99FlatVectorsFormat(FlatVectorScorerUtil.getLucene99FlatVectorsScorer());
+
+  private static final Lucene104ScalarQuantizedVectorScorer scorer =
+      new Lucene104ScalarQuantizedVectorScorer(FlatVectorScorerUtil.getLucene99FlatVectorsScorer());
+
+  private final ScalarEncoding encoding;
+
+  /**
+   * Allowed encodings for scalar quantization.
+   *
+   * <p>This specifies how many bits are used per dimension and also dictates packing of dimensions
+   * into a byte stream.
+   */
+  public enum ScalarEncoding {
+    /** Each dimension is quantized to 8 bits and treated as an unsigned value. */
+    UNSIGNED_BYTE(0, (byte) 8),

Review Comment:
   for uniformity with the old format, could we also allow `7` bits?
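A 7-bit constant could slot in alongside the existing encodings; the sketch below is standalone and mirrors only the shape of the PR's `ScalarEncoding` enum. The constant name `SEVEN_BIT`, its wire number, and the packing helper are hypothetical, not part of the PR:

```java
public class SevenBitSketch {
  // Illustrative stand-in for the PR's ScalarEncoding; values are assumptions.
  enum ScalarEncoding {
    UNSIGNED_BYTE(0, (byte) 8),
    // Hypothetical 7-bit encoding: still one dimension per byte on disk,
    // but quantized to [0, 127] for parity with the old format.
    SEVEN_BIT(2, (byte) 7);

    private final int wireNumber;
    private final byte bits;

    ScalarEncoding(int wireNumber, byte bits) {
      this.wireNumber = wireNumber;
      this.bits = bits;
    }

    byte getBits() {
      return bits;
    }

    int getWireNumber() {
      return wireNumber;
    }

    // Only sub-byte encodings (e.g. packed nibbles) fit multiple dimensions per byte.
    int getDimensionsPerByte() {
      return bits <= 4 ? 8 / bits : 1;
    }
  }

  public static void main(String[] args) {
    assert ScalarEncoding.SEVEN_BIT.getBits() == 7;
    assert ScalarEncoding.SEVEN_BIT.getDimensionsPerByte() == 1;
  }
}
```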


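The last javadoc bullet above (comparing half-byte queries against single-bit doc vectors with plain bit arithmetic) can be sketched as follows. The bit-plane layout, 64-dimensions-per-long packing, and method name are illustrative assumptions, not the format's actual wire layout:

```java
public class AsymmetricDotSketch {
  /**
   * Dot product of a 4-bit-per-dimension query against a 1-bit-per-dimension
   * doc vector. Dimensions are packed 64 per long; the query is stored as four
   * bit planes, where plane p holds bit p of every dimension's 4-bit value.
   */
  static long int4BitDotProduct(long[] queryPlanes, long[] docBits) {
    long sum = 0;
    for (int p = 0; p < 4; p++) {
      long planeSum = 0;
      for (int i = 0; i < docBits.length; i++) {
        // AND keeps dims set in the doc; popcount sums bit p of their query values.
        planeSum += Long.bitCount(queryPlanes[p * docBits.length + i] & docBits[i]);
      }
      sum += planeSum << p; // weight plane p by 2^p
    }
    return sum;
  }

  public static void main(String[] args) {
    // Doc has dims 0, 1, 3 set; query values are 5, 3, 0, 7 for dims 0..3.
    long[] docBits = {0b1011L};
    long[] queryPlanes = {0b1011L, 0b1010L, 0b1001L, 0b0000L};
    // Plain dot product over the set dims: 5 + 3 + 7 = 15.
    assert int4BitDotProduct(queryPlanes, docBits) == 15;
  }
}
```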

##########
lucene/core/src/java/org/apache/lucene/codecs/lucene104/Lucene104ScalarQuantizedVectorScorer.java:
##########
@@ -0,0 +1,204 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene104;
+
+import static org.apache.lucene.index.VectorSimilarityFunction.COSINE;
+import static org.apache.lucene.index.VectorSimilarityFunction.EUCLIDEAN;
+import static org.apache.lucene.index.VectorSimilarityFunction.MAXIMUM_INNER_PRODUCT;
+
+import java.io.IOException;
+import org.apache.lucene.codecs.hnsw.FlatVectorsScorer;
+import org.apache.lucene.index.KnnVectorValues;
+import org.apache.lucene.index.VectorSimilarityFunction;
+import org.apache.lucene.util.ArrayUtil;
+import org.apache.lucene.util.VectorUtil;
+import org.apache.lucene.util.hnsw.RandomVectorScorer;
+import org.apache.lucene.util.hnsw.RandomVectorScorerSupplier;
+import org.apache.lucene.util.hnsw.UpdateableRandomVectorScorer;
+import org.apache.lucene.util.quantization.OptimizedScalarQuantizer;
+
+/** Vector scorer over OptimizedScalarQuantized vectors */
+public class Lucene104ScalarQuantizedVectorScorer implements FlatVectorsScorer {
+  private final FlatVectorsScorer nonQuantizedDelegate;
+
+  public Lucene104ScalarQuantizedVectorScorer(FlatVectorsScorer nonQuantizedDelegate) {
+    this.nonQuantizedDelegate = nonQuantizedDelegate;
+  }
+
+  @Override
+  public RandomVectorScorerSupplier getRandomVectorScorerSupplier(
+      VectorSimilarityFunction similarityFunction, KnnVectorValues vectorValues)
+      throws IOException {
+    if (vectorValues instanceof QuantizedByteVectorValues qv) {
+      return new ScalarQuantizedVectorScorerSupplier(qv, similarityFunction);
+    }
+    // It is possible to get to this branch during initial indexing and flush
+    return nonQuantizedDelegate.getRandomVectorScorerSupplier(similarityFunction, vectorValues);
+  }
+
+  @Override
+  public RandomVectorScorer getRandomVectorScorer(
+      VectorSimilarityFunction similarityFunction, KnnVectorValues vectorValues, float[] target)
+      throws IOException {
+    if (vectorValues instanceof QuantizedByteVectorValues qv) {
+      OptimizedScalarQuantizer quantizer = qv.getQuantizer();
+      byte[] targetQuantized =
+          new byte
+              [OptimizedScalarQuantizer.discretize(
+                  target.length, qv.getScalarEncoding().getDimensionsPerByte())];
+      // We make a copy as the quantization process mutates the input
+      float[] copy = ArrayUtil.copyOfSubArray(target, 0, target.length);
+      if (similarityFunction == COSINE) {
+        VectorUtil.l2normalize(copy);
+      }
+      target = copy;
+      var targetCorrectiveTerms =
+          quantizer.scalarQuantize(
+              target, targetQuantized, qv.getScalarEncoding().getBits(), qv.getCentroid());
+      return new RandomVectorScorer.AbstractRandomVectorScorer(qv) {
+        @Override
+        public float score(int node) throws IOException {
+          return quantizedScore(
+              targetQuantized, targetCorrectiveTerms, qv, node, similarityFunction);
+        }
+      };
+    }
+    // It is possible to get to this branch during initial indexing and flush
+    return nonQuantizedDelegate.getRandomVectorScorer(similarityFunction, vectorValues, target);
+  }
+
+  @Override
+  public RandomVectorScorer getRandomVectorScorer(
+      VectorSimilarityFunction similarityFunction, KnnVectorValues vectorValues, byte[] target)
+      throws IOException {
+    return nonQuantizedDelegate.getRandomVectorScorer(similarityFunction, vectorValues, target);
+  }
+
+  @Override
+  public String toString() {
+    return "Lucene104ScalarQuantizedVectorScorer(nonQuantizedDelegate="
+        + nonQuantizedDelegate
+        + ")";
+  }
+
+  private static final class ScalarQuantizedVectorScorerSupplier
+      implements RandomVectorScorerSupplier {
+    private final QuantizedByteVectorValues targetValues;
+    private final QuantizedByteVectorValues values;
+    private final VectorSimilarityFunction similarity;
+
+    public ScalarQuantizedVectorScorerSupplier(
+        QuantizedByteVectorValues values, VectorSimilarityFunction similarity) throws IOException {
+      this.targetValues = values.copy();
+      this.values = values;
+      this.similarity = similarity;
+    }
+
+    @Override
+    public UpdateableRandomVectorScorer scorer() throws IOException {
+      return new UpdateableRandomVectorScorer.AbstractUpdateableRandomVectorScorer(values) {
+        private byte[] targetVector;
+        private OptimizedScalarQuantizer.QuantizationResult targetCorrectiveTerms;
+
+        @Override
+        public float score(int node) throws IOException {
+          return quantizedScore(targetVector, targetCorrectiveTerms, values, node, similarity);
+        }
+
+        @Override
+        public void setScoringOrdinal(int node) throws IOException {
+          var rawTargetVector = targetValues.vectorValue(node);
+          switch (values.getScalarEncoding()) {
+            case UNSIGNED_BYTE -> targetVector = rawTargetVector;
+            case PACKED_NIBBLE -> {
+              if (targetVector == null) {
+                targetVector = new byte[OptimizedScalarQuantizer.discretize(values.dimension(), 2)];
+              }
+              OffHeapScalarQuantizedVectorValues.unpackNibbles(rawTargetVector, targetVector);
+            }
+          }
+          targetCorrectiveTerms = targetValues.getCorrectiveTerms(node);
+        }
+      };
+    }
+
+    @Override
+    public RandomVectorScorerSupplier copy() throws IOException {
+      return new ScalarQuantizedVectorScorerSupplier(values.copy(), similarity);
+    }
+  }
+
+  private static final float[] SCALE_LUT =
+      new float[] {
+        1f,
+        1f / ((1 << 2) - 1),
+        1f / ((1 << 3) - 1),
+        1f / ((1 << 4) - 1),
+        1f / ((1 << 5) - 1),
+        1f / ((1 << 6) - 1),
+        1f / ((1 << 7) - 1),
+        1f / ((1 << 8) - 1),
+      };
+
+  private static float quantizedScore(
+      byte[] quantizedQuery,
+      OptimizedScalarQuantizer.QuantizationResult queryCorrections,
+      QuantizedByteVectorValues targetVectors,
+      int targetOrd,
+      VectorSimilarityFunction similarityFunction)
+      throws IOException {
+    var scalarEncoding = targetVectors.getScalarEncoding();
+    byte[] quantizedDoc = targetVectors.vectorValue(targetOrd);
+    float qcDist =
+        switch (scalarEncoding) {
+          case UNSIGNED_BYTE -> VectorUtil.uint8DotProduct(quantizedQuery, quantizedDoc);
+          case PACKED_NIBBLE -> VectorUtil.int4DotProductPacked(quantizedQuery, quantizedDoc);
+        };
+    OptimizedScalarQuantizer.QuantizationResult indexCorrections =
+        targetVectors.getCorrectiveTerms(targetOrd);
+    float scale = SCALE_LUT[scalarEncoding.getBits() - 1];
+    float x1 = indexCorrections.quantizedComponentSum();
+    float ax = indexCorrections.lowerInterval();
+    // Here we assume `lx` is simply bit vectors, so the scaling isn't necessary

Review Comment:
   ```suggestion
       // Here we must scale according to the bits
   ```
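The suggested comment reflects what the table in the diff encodes: entry `bits - 1` of `SCALE_LUT` holds `1 / (2^bits - 1)`, the factor that maps a `bits`-wide unsigned code in `[0, 2^bits - 1]` back onto `[0, 1]` (so 1-bit codes need no rescaling at all). A standalone sketch checking those table values, with names mirroring but not taken from the PR:

```java
public class ScaleLutSketch {
  // Entry (bits - 1) holds 1 / (2^bits - 1), matching the diff's SCALE_LUT.
  static final float[] SCALE_LUT = new float[8];

  static {
    for (int bits = 1; bits <= 8; bits++) {
      SCALE_LUT[bits - 1] = 1f / ((1 << bits) - 1);
    }
  }

  public static void main(String[] args) {
    // Single-bit codes are already 0 or 1: 1 / (2^1 - 1) == 1.
    assert SCALE_LUT[0] == 1f;
    // Packed nibbles (4 bits) scale by 1/15, full bytes by 1/255.
    assert SCALE_LUT[3] == 1f / 15;
    assert SCALE_LUT[7] == 1f / 255;
  }
}
```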



##########
lucene/core/src/java/org/apache/lucene/codecs/lucene104/Lucene104ScalarQuantizedVectorsWriter.java:
##########
@@ -0,0 +1,858 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.codecs.lucene104;
+
+import static org.apache.lucene.codecs.lucene104.Lucene104ScalarQuantizedVectorsFormat.DIRECT_MONOTONIC_BLOCK_SHIFT;
+import static org.apache.lucene.codecs.lucene104.Lucene104ScalarQuantizedVectorsFormat.QUANTIZED_VECTOR_COMPONENT;
+import static org.apache.lucene.index.VectorSimilarityFunction.COSINE;
+import static org.apache.lucene.search.DocIdSetIterator.NO_MORE_DOCS;
+import static org.apache.lucene.util.RamUsageEstimator.shallowSizeOfInstance;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.apache.lucene.codecs.CodecUtil;
+import org.apache.lucene.codecs.KnnVectorsReader;
+import org.apache.lucene.codecs.hnsw.FlatFieldVectorsWriter;
+import org.apache.lucene.codecs.hnsw.FlatVectorsWriter;
+import org.apache.lucene.codecs.lucene104.Lucene104ScalarQuantizedVectorsFormat.ScalarEncoding;
+import org.apache.lucene.codecs.lucene95.OrdToDocDISIReaderConfiguration;
+import org.apache.lucene.codecs.perfield.PerFieldKnnVectorsFormat;
+import org.apache.lucene.index.DocsWithFieldSet;
+import org.apache.lucene.index.FieldInfo;
+import org.apache.lucene.index.FloatVectorValues;
+import org.apache.lucene.index.IndexFileNames;
+import org.apache.lucene.index.KnnVectorValues;
+import org.apache.lucene.index.MergeState;
+import org.apache.lucene.index.SegmentWriteState;
+import org.apache.lucene.index.Sorter;
+import org.apache.lucene.index.VectorEncoding;
+import org.apache.lucene.index.VectorSimilarityFunction;
+import org.apache.lucene.internal.hppc.FloatArrayList;
+import org.apache.lucene.search.DocIdSetIterator;
+import org.apache.lucene.search.VectorScorer;
+import org.apache.lucene.store.IndexInput;
+import org.apache.lucene.store.IndexOutput;
+import org.apache.lucene.util.IOUtils;
+import org.apache.lucene.util.VectorUtil;
+import org.apache.lucene.util.hnsw.CloseableRandomVectorScorerSupplier;
+import org.apache.lucene.util.hnsw.RandomVectorScorerSupplier;
+import org.apache.lucene.util.hnsw.UpdateableRandomVectorScorer;
+import org.apache.lucene.util.quantization.OptimizedScalarQuantizer;
+
+/** Copied from Lucene, replace with Lucene's implementation sometime after Lucene 10 */
+public class Lucene104ScalarQuantizedVectorsWriter extends FlatVectorsWriter {
+  private static final long SHALLOW_RAM_BYTES_USED =
+      shallowSizeOfInstance(Lucene104ScalarQuantizedVectorsWriter.class);
+
+  private final SegmentWriteState segmentWriteState;
+  private final List<FieldWriter> fields = new ArrayList<>();
+  private final IndexOutput meta, vectorData;
+  private final ScalarEncoding encoding;
+  private final FlatVectorsWriter rawVectorDelegate;
+  private final Lucene104ScalarQuantizedVectorScorer vectorsScorer;
+  private boolean finished;
+
+  /**
+   * Sole constructor
+   *
+   * @param vectorsScorer the scorer to use for scoring vectors
+   */
+  protected Lucene104ScalarQuantizedVectorsWriter(
+      SegmentWriteState state,
+      ScalarEncoding encoding,
+      FlatVectorsWriter rawVectorDelegate,
+      Lucene104ScalarQuantizedVectorScorer vectorsScorer)
+      throws IOException {
+    super(vectorsScorer);
+    this.encoding = encoding;
+    this.vectorsScorer = vectorsScorer;
+    this.segmentWriteState = state;
+    String metaFileName =
+        IndexFileNames.segmentFileName(
+            state.segmentInfo.name,
+            state.segmentSuffix,
+            Lucene104ScalarQuantizedVectorsFormat.META_EXTENSION);
+
+    String vectorDataFileName =
+        IndexFileNames.segmentFileName(
+            state.segmentInfo.name,
+            state.segmentSuffix,
+            Lucene104ScalarQuantizedVectorsFormat.VECTOR_DATA_EXTENSION);
+    this.rawVectorDelegate = rawVectorDelegate;
+    try {
+      meta = state.directory.createOutput(metaFileName, state.context);
+      vectorData = state.directory.createOutput(vectorDataFileName, state.context);
+
+      CodecUtil.writeIndexHeader(
+          meta,
+          Lucene104ScalarQuantizedVectorsFormat.META_CODEC_NAME,
+          Lucene104ScalarQuantizedVectorsFormat.VERSION_CURRENT,
+          state.segmentInfo.getId(),
+          state.segmentSuffix);
+      CodecUtil.writeIndexHeader(
+          vectorData,
+          Lucene104ScalarQuantizedVectorsFormat.VECTOR_DATA_CODEC_NAME,
+          Lucene104ScalarQuantizedVectorsFormat.VERSION_CURRENT,
+          state.segmentInfo.getId(),
+          state.segmentSuffix);
+    } catch (Throwable t) {
+      IOUtils.closeWhileSuppressingExceptions(t, this);
+      throw t;
+    }
+  }
+
+  @Override
+  public FlatFieldVectorsWriter<?> addField(FieldInfo fieldInfo) throws IOException {
+    FlatFieldVectorsWriter<?> rawVectorDelegate = this.rawVectorDelegate.addField(fieldInfo);
+    if (fieldInfo.getVectorEncoding().equals(VectorEncoding.FLOAT32)) {
+      @SuppressWarnings("unchecked")
+      FieldWriter fieldWriter =
+          new FieldWriter(fieldInfo, (FlatFieldVectorsWriter<float[]>) rawVectorDelegate);
+      fields.add(fieldWriter);
+      return fieldWriter;
+    }
+    return rawVectorDelegate;
+  }
+
+  @Override
+  public void flush(int maxDoc, Sorter.DocMap sortMap) throws IOException {
+    rawVectorDelegate.flush(maxDoc, sortMap);
+    for (FieldWriter field : fields) {
+      // after raw vectors are written, normalize vectors for clustering and quantization
+      if (VectorSimilarityFunction.COSINE == field.fieldInfo.getVectorSimilarityFunction()) {
+        field.normalizeVectors();
+      }
+      final float[] clusterCenter;
+      int vectorCount = field.flatFieldVectorsWriter.getVectors().size();
+      clusterCenter = new float[field.dimensionSums.length];
+      if (vectorCount > 0) {
+        for (int i = 0; i < field.dimensionSums.length; i++) {
+          clusterCenter[i] = field.dimensionSums[i] / vectorCount;
+        }
+        if (VectorSimilarityFunction.COSINE == field.fieldInfo.getVectorSimilarityFunction()) {
+          VectorUtil.l2normalize(clusterCenter);
+        }
+      }
+      if (segmentWriteState.infoStream.isEnabled(QUANTIZED_VECTOR_COMPONENT)) {
+        segmentWriteState.infoStream.message(
+            QUANTIZED_VECTOR_COMPONENT, "Vectors' count:" + vectorCount);
+      }
+      OptimizedScalarQuantizer quantizer =
+          new OptimizedScalarQuantizer(field.fieldInfo.getVectorSimilarityFunction());
+      if (sortMap == null) {
+        writeField(field, clusterCenter, maxDoc, quantizer);
+      } else {
+        writeSortingField(field, clusterCenter, maxDoc, sortMap, quantizer);
+      }
+      field.finish();
+    }
+  }
+
+  private void writeField(
+      FieldWriter fieldData, float[] clusterCenter, int maxDoc, OptimizedScalarQuantizer quantizer)
+      throws IOException {
+    // write vector values
+    long vectorDataOffset = vectorData.alignFilePointer(Float.BYTES);
+    writeVectors(fieldData, clusterCenter, quantizer);
+    long vectorDataLength = vectorData.getFilePointer() - vectorDataOffset;
+    float centroidDp =
+        !fieldData.getVectors().isEmpty() ? VectorUtil.dotProduct(clusterCenter, clusterCenter) : 0;

Review Comment:
   ```suggestion
    float centroidDp =
        fieldData.getVectors().isEmpty() ? 0 : VectorUtil.dotProduct(clusterCenter, clusterCenter);
   ```
   not a big deal, but I have found `!` or negatives in general to be rife with silly bugs.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

