[
https://issues.apache.org/jira/browse/CARBONDATA-80?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15401990#comment-15401990
]
ASF GitHub Bot commented on CARBONDATA-80:
------------------------------------------
Github user kumarvishal09 commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/44#discussion_r72972716
--- Diff:
core/src/main/java/org/carbondata/core/cache/dictionary/ColumnDictionaryChunkWrapper.java
---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.carbondata.core.cache.dictionary;
+
+import java.nio.ByteBuffer;
+import java.util.Iterator;
+import java.util.List;
+
+import org.carbondata.format.ColumnDictionaryChunk;
+
+/**
+ * This class is a wrapper over the column dictionary chunk thrift object.
+ * It wraps the List<ColumnDictionaryChunk> and provides an API
+ * to read the dictionary values out as byte arrays.
+ */
+public class ColumnDictionaryChunkWrapper implements Iterator<byte[]> {
+
+ /**
+ * list of dictionaryChunks
+ */
+ private List<ColumnDictionaryChunk> columnDictionaryChunks;
+
+ /**
+ * total number of dictionary values across all chunks
+ */
+ private int size;
+
+ /**
+ * count of elements returned so far by the iterator
+ */
+ private int currentSize;
+
+ /**
+ * index of the next element within the current inner list
+ */
+ private int iteratorIndex;
+
+ /**
+ * index of the chunk in List<ColumnDictionaryChunk> currently being traversed
+ */
+ private int outerIndex;
+
+ /**
+ * Constructor of ColumnDictionaryChunkWrapper
+ *
+ * @param columnDictionaryChunks
+ */
+ public ColumnDictionaryChunkWrapper(List<ColumnDictionaryChunk> columnDictionaryChunks) {
+ this.columnDictionaryChunks = columnDictionaryChunks;
+ for (ColumnDictionaryChunk dictionaryChunk : columnDictionaryChunks) {
+ this.size += dictionaryChunk.getValues().size();
+ }
+ }
+
+ /**
+ * Returns {@code true} if the iteration has more elements.
+ * (In other words, returns {@code true} if {@link #next} would
+ * return an element rather than throwing an exception.)
+ *
+ * @return {@code true} if the iteration has more elements
+ */
+ @Override public boolean hasNext() {
+ return (currentSize < size);
+ }
+
+ /**
+ * Returns the next element in the iteration.
+ * The method picks elements from the first inner list until it is
+ * exhausted, then moves on to the second inner list, and so on.
+ *
+ * @return the next element in the iteration
+ */
+ @Override public byte[] next() {
+ if (iteratorIndex >= columnDictionaryChunks.get(outerIndex).getValues().size()) {
+ iteratorIndex = 0;
+ outerIndex++;
+ }
+ ByteBuffer buffer = columnDictionaryChunks.get(outerIndex).getValues().get(iteratorIndex);
+ int length = buffer.limit();
+ byte[] value = new byte[length];
+ buffer.get(value, 0, value.length);
--- End diff --
Can we use buffer.array() instead of calling get()? get() copies the data, so
using array() would let us avoid the System.arraycopy.
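A hedged sketch of the trade-off behind this suggestion (class and method names here are illustrative, not from the patch): buffer.array() returns the backing array without copying, but it is only equivalent to the get() version when the buffer is array-backed and the backing array exactly spans the logical content, so a safe variant needs a fallback to the copying path.

```java
import java.nio.ByteBuffer;

public class BufferReadDemo {

    // Copying read, as in the current diff: always allocates and copies limit() bytes.
    static byte[] copyOut(ByteBuffer buffer) {
        byte[] value = new byte[buffer.limit()];
        buffer.get(value, 0, value.length);
        return value;
    }

    // Zero-copy read: returns the backing array directly, but only when it
    // is safe to do so; otherwise falls back to copying.
    static byte[] zeroCopy(ByteBuffer buffer) {
        if (buffer.hasArray() && buffer.arrayOffset() == 0
                && buffer.array().length == buffer.limit()) {
            return buffer.array();
        }
        return copyOut(buffer);
    }

    public static void main(String[] args) {
        // wrap() produces an array-backed buffer whose array spans the content,
        // so zeroCopy returns the original array with no copy.
        ByteBuffer exact = ByteBuffer.wrap(new byte[] {1, 2, 3});
        System.out.println(zeroCopy(exact).length);
    }
}
```

Note that handing out the backing array also shares mutable state with the buffer, which may or may not be acceptable for dictionary values that are held in a cache.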
> Dictionary values should be equally distributed in buckets while loading in
> memory
> ----------------------------------------------------------------------------------
>
> Key: CARBONDATA-80
> URL: https://issues.apache.org/jira/browse/CARBONDATA-80
> Project: CarbonData
> Issue Type: Improvement
> Reporter: Manish Gupta
> Assignee: Manish Gupta
> Priority: Minor
>
> Whenever a query is executed, the dictionary for the queried columns is
> loaded into memory. For incremental loads, dictionary values are loaded
> incrementally, so one list contains several sub-lists of dictionary values.
> After incremental loads the dictionary values may not be equally distributed
> across the sub-buckets, which can increase the search time for a value when
> there are many incremental loads.
> Therefore the dictionary values should be divided equally among the sub-buckets.
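The redistribution described in the issue could be sketched as follows (the class and method names are hypothetical, not taken from the patch): flatten the unevenly sized sub-lists produced by incremental loads and re-chunk them into fixed-size buckets.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: redistributes dictionary values so that every bucket
// (except possibly the last) holds exactly bucketSize entries.
public class DictionaryRebalancer {

    public static List<List<byte[]>> rebalance(List<List<byte[]>> buckets, int bucketSize) {
        List<List<byte[]>> result = new ArrayList<>();
        List<byte[]> current = new ArrayList<>(bucketSize);
        for (List<byte[]> bucket : buckets) {
            for (byte[] value : bucket) {
                current.add(value);
                if (current.size() == bucketSize) {
                    // current bucket is full; start a new one
                    result.add(current);
                    current = new ArrayList<>(bucketSize);
                }
            }
        }
        if (!current.isEmpty()) {
            result.add(current); // trailing, possibly smaller, bucket
        }
        return result;
    }
}
```

With equal-sized buckets, locating the bucket for the i-th value becomes a constant-time division instead of a scan over sub-lists of unknown sizes.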
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)