[ https://issues.apache.org/jira/browse/LUCENE-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12752931#action_12752931 ]
Nadav Har'El commented on LUCENE-1899:
--------------------------------------

Hi Shai, I guess you're right that if there's such a utility function, we
should probably use it. I just wonder what the rationale is behind the
specific formula in this function - basically newsize = oldsize * 1.125 + 6.
This formula ensures that in the worst case just 6% of the array space is
wasted (instead of 50% in the doubling approach), but the number of
reallocations and copies is 8 times higher, and performance is
proportionally slower (although obviously, both are linear in amortized
time - which the current code isn't). Was there any thought given to why
the factor 0.125 is better than 0.25, 0.5, 0.01 or 1.0? I'm not saying that
1.0 (doubling) is best, just that I don't know why 0.125 is.

> Inefficient growth of OpenBitSet
> --------------------------------
>
>                 Key: LUCENE-1899
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1899
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 2.9
>            Reporter: Nadav Har'El
>            Priority: Minor
>
> Hi, I found a potentially serious efficiency problem with OpenBitSet.
> One typical (I think) way to build a bit set is to set() the bits one by
> one - e.g., have a HitCollector set() the bit for each matching document.
> The underlying array of longs needs to grow as more and more bits are set,
> of course.
> But looking at the code, it appears to me that the array grows very
> inefficiently - in the worst case (when doc ids are sorted, as they would
> normally be in the HitCollector case, for example), copying the array
> again and again for every added bit... The relevant code in OpenBitSet.java
> is:
>
>   public void set(long index) {
>     int wordNum = expandingWordNum(index);
>     ...
>   }
>
>   protected int expandingWordNum(long index) {
>     int wordNum = (int)(index >> 6);
>     if (wordNum >= wlen) {
>       ensureCapacity(index+1);
>       ...
>   }
>
>   public void ensureCapacityWords(int numWords) {
>     if (bits.length < numWords) {
>       long[] newBits = new long[numWords];
>       System.arraycopy(bits, 0, newBits, 0, wlen);
>       bits = newBits;
>     }
>   }
>
> As you can see, if the bits array is not long enough, a new one is
> allocated at exactly the right size - and in the worst case it can grow by
> just one word every time...
> Shouldn't the growth be more exponential in nature, e.g., grow to the
> maximum of index+1 and twice the existing size?
> Alternatively, if the growth is so inefficient, this should be documented,
> and it should be recommended to use the variant of the constructor with
> the correct initial size (e.g., in the HitCollector case, the number of
> documents in the index), and to use the fastSet() method instead of set().
> Thanks,
> Nadav.
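
For illustration only, here is a small standalone sketch (not Lucene code -
the class and method names are made up) that counts how many reallocations
the newsize = oldsize * 1.125 + 6 formula performs versus plain doubling,
when an array is grown on demand one word at a time:

public class GrowthComparison {

  // The formula under discussion: newsize = oldsize * 1.125 + 6
  // (integer arithmetic; oldSize >> 3 is oldSize / 8).
  static int grow125(int oldSize) {
    return oldSize + (oldSize >> 3) + 6;
  }

  // The doubling alternative mentioned above.
  static int growDoubling(int oldSize) {
    return oldSize * 2;
  }

  // Count how many reallocations are needed to reach targetWords
  // under the chosen growth policy.
  static int countReallocations(int targetWords, boolean doubling) {
    int size = 1;
    int reallocs = 0;
    while (size < targetWords) {
      size = doubling ? growDoubling(size) : grow125(size);
      reallocs++;
    }
    return reallocs;
  }

  public static void main(String[] args) {
    int words = 1 << 20;   // 1M words, i.e. a 64M-bit set
    System.out.println("1.125x+6 reallocations: " + countReallocations(words, false));
    System.out.println("doubling reallocations: " + countReallocations(words, true));
  }
}

Both counts grow logarithmically in the target size, which is why either
policy is amortized linear; what differs is the constant factor on the
copying, traded against the amount of slack space kept around.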
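
And for reference, a minimal sketch of the kind of change the quoted issue
suggests for ensureCapacityWords - illustrative only, not a tested patch
against OpenBitSet; the bits/wlen fields just mirror the snippet quoted
above:

// Illustrative sketch, not a patch: a minimal bit-set-like class showing
// the suggested policy - grow to at least the requested size, but never
// to less than twice the current capacity.
public class GrowingBits {
  private long[] bits = new long[1];
  private int wlen = 1;              // number of words actually in use

  public void ensureCapacityWords(int numWords) {
    if (bits.length < numWords) {
      int newLength = Math.max(numWords, bits.length * 2);
      long[] newBits = new long[newLength];
      System.arraycopy(bits, 0, newBits, 0, wlen);
      bits = newBits;
    }
  }

  public void set(long index) {
    int wordNum = (int) (index >> 6);       // which long holds this bit
    if (wordNum >= wlen) {
      ensureCapacityWords(wordNum + 1);
      wlen = wordNum + 1;
    }
    bits[wordNum] |= 1L << (index & 0x3f);  // set the bit within the word
  }
}

Growing to at least twice the current length keeps the total number of
words ever copied within a constant factor of the final array size, so
repeatedly calling set() on increasing doc ids becomes amortized linear.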