Github user sameeragarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/12055#discussion_r57978007
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/AggregateHashMap.java ---
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.vectorized;
+
+import java.util.Arrays;
+
+import org.apache.spark.memory.MemoryMode;
+import org.apache.spark.sql.types.StructType;
+
+import static org.apache.spark.sql.types.DataTypes.LongType;
+
+/**
+ * This is an illustrative implementation of an append-only single-key/single value aggregate hash
+ * map that can act as a 'cache' for extremely fast key-value lookups while evaluating aggregates
+ * (and fall back to the `BytesToBytesMap` if a given key isn't found). This can be potentially
+ * 'codegened' in TungstenAggregate to speed up aggregates w/ key.
+ *
+ * It is backed by a power-of-2-sized array for index lookups and a columnar batch that stores the
+ * key-value pairs. The index lookups in the array rely on linear probing (with a small number of
+ * maximum tries) and use an inexpensive hash function which makes it really efficient for a
+ * majority of lookups. However, using linear probing and an inexpensive hash function also makes it
+ * less robust as compared to the `BytesToBytesMap` (especially for a large number of keys or even
+ * for certain distribution of keys) and requires us to fall back on the latter for correctness.
+ */
+public class AggregateHashMap {
+ public ColumnarBatch batch;
+ public int[] buckets;
+
+ private int numBuckets;
+ private int numRows = 0;
+ private int maxSteps = 3;
+
+ private static int DEFAULT_NUM_BUCKETS = 65536 * 4;
--- End diff --
by `capacity` I was implying `numBuckets` instead of the capacity of the batch, but yes, the latter makes more sense.
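For readers following along, the lookup scheme described in the Javadoc above (a power-of-2-sized bucket array, linear probing with a bounded number of tries, a cheap hash, and a fallback signal when the probe budget is exhausted) can be sketched roughly as below. This is a hypothetical standalone illustration under stated assumptions, not Spark's actual implementation: the class and method names (`LinearProbingSketch`, `findOrInsert`, `addCount`) are invented, plain `long[]` arrays stand in for the columnar batch, and returning `-1` stands in for falling back to `BytesToBytesMap`.

```java
import java.util.Arrays;

// Hypothetical sketch of the append-only, bounded linear-probing lookup
// described in the Javadoc; not the actual Spark AggregateHashMap code.
public class LinearProbingSketch {
  private final long[] keys;      // stands in for the columnar batch's key column
  private final long[] values;    // running aggregate (here: a count) per key
  private final int[] buckets;    // power-of-2-sized index array; -1 marks empty
  private final int numBuckets;
  private final int maxSteps = 3; // bounded probe length before signalling fallback
  private int numRows = 0;

  public LinearProbingSketch(int numBuckets) {
    this.numBuckets = numBuckets; // assumed to be a power of 2
    this.keys = new long[numBuckets];
    this.values = new long[numBuckets];
    this.buckets = new int[numBuckets];
    Arrays.fill(buckets, -1);
  }

  // Returns the row id for key, appending it if absent; returns -1 when the
  // probe budget is exhausted (the point where Spark would fall back to the
  // more robust BytesToBytesMap).
  public int findOrInsert(long key) {
    int idx = hash(key);
    for (int step = 0; step < maxSteps; step++) {
      if (buckets[idx] == -1) {             // empty slot: append the key
        keys[numRows] = key;
        buckets[idx] = numRows;
        return numRows++;
      }
      if (keys[buckets[idx]] == key) {      // existing key: cache hit
        return buckets[idx];
      }
      idx = (idx + 1) & (numBuckets - 1);   // linear probe, wrapping around
    }
    return -1;                              // too many collisions: fall back
  }

  public void addCount(long key) {
    int row = findOrInsert(key);
    if (row != -1) values[row]++;
  }

  public long count(long key) {
    int row = findOrInsert(key);
    return row == -1 ? -1 : values[row];
  }

  // Inexpensive hash: mask the key into the power-of-2 bucket range.
  private int hash(long key) {
    return (int) (key & (numBuckets - 1));
  }
}
```

The power-of-2 size is what lets both the hash and the probe wrap-around use a bitmask (`& (numBuckets - 1)`) instead of a modulo, which is part of why lookups stay cheap; the trade-off, as the Javadoc notes, is poor robustness for adversarial key distributions, hence the bounded `maxSteps` and the fallback path.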